907 results for High Lift Systems Design
Abstract:
The relentlessly increasing demand for network bandwidth, driven primarily by Internet-based services such as mobile computing, cloud storage, and video-on-demand, calls for more efficient utilization of the available communication spectrum, such as that afforded by resurgent DSP-powered coherent optical communications. Encoding information in the phase of the optical carrier, using multilevel phase modulation formats, and employing coherent detection at the receiver allows for enhanced spectral efficiency and thus enables increased network capacity. The distributed feedback (DFB) semiconductor laser has served as the near-exclusive light source powering the long-haul fiber-optic network for over 30 years. The transition to coherent communication systems is pushing the DFB laser to the limits of its abilities, because its limited temporal coherence directly restricts the number of distinct phases that can be imparted to a single optical pulse and thus the data capacity. Temporal coherence, most commonly quantified by the spectral linewidth Δν, is limited by phase noise, the result of quantum-mandated spontaneous emission of photons arising from random recombination of carriers in the active region of the laser.
In this work we develop a new type of semiconductor laser with the requisite coherence properties. We demonstrate electrically driven lasers characterized by a quantum-noise-limited spectral linewidth as low as 18 kHz. This narrow linewidth is the result of a fundamentally new laser design philosophy that separates the functions of photon generation and storage, enabled by a hybrid Si/III-V integration platform. Photons generated in the active region of the III-V material are readily stored in the low-loss Si that hosts the bulk of the laser field, thereby enabling high-Q photon storage. The storage of a large number of coherent quanta acts as an optical flywheel, which by its inertia reduces the effect of the spontaneous-emission-mandated phase perturbations on the laser field, while the enhanced photon lifetime effectively reduces the emission rate of incoherent quanta into the lasing mode. Narrow linewidths are obtained over a wavelength bandwidth spanning the entire optical communication C-band (1530–1575 nm) at only a fraction of the input power required by conventional DFB lasers. The results presented in this thesis hold great promise for the large-scale integration of lithographically tuned, high-coherence laser arrays for coherent communications, enabling Tb/s-scale data capacities.
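A compact way to see why high-Q photon storage narrows the line is the modified Schawlow-Townes expression (a standard textbook result, stated here for orientation rather than quoted from the thesis):

\Delta\nu = \frac{\pi h \nu \,(\Delta\nu_c)^2}{P_{\mathrm{out}}}\left(1 + \alpha_H^2\right), \qquad \Delta\nu_c = \frac{\nu}{Q},

where Δν_c is the cold-cavity linewidth, Q the loaded quality factor, P_out the output power, and α_H the Henry linewidth-enhancement factor. Since Δν scales as 1/Q², shifting the stored field into low-loss Si, which raises Q, reduces the linewidth quadratically.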
Abstract:
With the size of transistors approaching the sub-nanometer scale and Si-based photonics pinned at the micrometer scale by the diffraction limit of light, we are unable to easily integrate the high transfer speeds of this comparatively bulky technology with the increasingly smaller architecture of state-of-the-art processors. However, we can bridge the gap between these two technologies by directly coupling electrons to photons through the use of dispersive metals in optics. Doing so gives us access to the surface electromagnetic wave excitations that arise at a metal/dielectric interface, which both confine and enhance light in subwavelength dimensions, two promising characteristics for the development of integrated chip technology. This platform is known as plasmonics, and it allows us to design a broad range of complex metal/dielectric systems, all having different nanophotonic responses, but all originating from our ability to engineer the system's surface plasmon resonances and interactions. In this thesis, we demonstrate how plasmonics can be used to develop coupled metal-dielectric systems that function as tunable plasmonic hole-array color filters for CMOS image sensing, visible metamaterials composed of coupled negative-index plasmonic coaxial waveguides, and programmable plasmonic waveguide networks that serve as color routers and logic devices at telecommunication wavelengths.
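The surface waves in question are surface plasmon polaritons; their dispersion relation at a single metal/dielectric interface (a textbook result, not specific to this thesis) is

k_{\mathrm{SPP}} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},

where ε_m and ε_d are the permittivities of the metal and the dielectric. Because Re(ε_m) < 0 and |ε_m| > ε_d at optical frequencies, k_SPP exceeds the free-space wavevector, which is what confines the mode below the diffraction limit.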
Abstract:
In the past, many different methodologies have been devised to support software development, and separate sets of methodologies have been developed to support the analysis of software artefacts. We have identified this mismatch as one of the causes of the poor reliability of embedded systems software. The issue with software development styles is that they are "analysis-agnostic": they do not try to structure the code in a way that lends itself to analysis. The analysis is usually applied after the fact, once the software has been developed, and it requires a large amount of effort. The issue with software analysis methodologies is that they do not exploit available information about the system being analyzed.
In this thesis we address the above issues by developing a new methodology, called "analysis-aware" design, that links software development styles with the capabilities of analysis tools. This methodology forms the basis of a framework for interactive software development. The framework consists of an executable specification language and a set of analysis tools based on static analysis, testing, and model checking. The language enforces an analysis-friendly code structure and offers primitives that allow users to implement their own testers and model checkers directly in the language. We introduce a new approach to static analysis that takes advantage of the capabilities of a rule-based engine. We have applied the analysis-aware methodology to the development of a smart home application.
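As a loose illustration of what a rule-based static-analysis pass can look like (a minimal Python sketch under assumed conventions; the thesis's own specification language and rule engine are not shown here):

    import ast

    # Each rule is a (name, predicate) pair over AST nodes; every match is reported.
    # These two rules are illustrative stand-ins, not rules from the thesis.
    RULES = [
        ("no-bare-except", lambda n: isinstance(n, ast.ExceptHandler) and n.type is None),
        ("no-eval", lambda n: isinstance(n, ast.Call)
                    and isinstance(n.func, ast.Name) and n.func.id == "eval"),
    ]

    def analyze(source: str):
        """Walk the AST once and fire every rule on every node."""
        findings = []
        for node in ast.walk(ast.parse(source)):
            for name, matches in RULES:
                if matches(node):
                    findings.append((name, getattr(node, "lineno", 0)))
        return findings

    if __name__ == "__main__":
        # Flags the call to eval and the bare except clause.
        print(analyze("try:\n    eval('1+1')\nexcept:\n    pass\n"))

Real engines evaluate richer rule languages over program models, but the shape is the same: a fixed traversal feeding declarative rules.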
Abstract:
The design, synthesis, and magnetic characterization of thiophene-based models for the polaronic ferromagnet are described. Synthetic strategies based on Wittig and Suzuki couplings were used to produce polymers with extended π-systems. Oxidative doping with AsF_5 or I_2 produces radical cations (polarons) that are stable at room temperature. Magnetic characterization of the doped polymers, using SQUID-based magnetometry, indicates that in several instances ferromagnetic coupling of polarons occurs along the polymer chain. An investigation of the influence of polaron stability and delocalization on the magnitude of ferromagnetic coupling is pursued. A lower limit for mild, solution-phase I_2 doping is established. A comparison of the variable-temperature data of various polymers reveals that deleterious antiferromagnetic interactions are relatively insensitive to spin concentration, doping protocol, or spin state. Comparison of the various polymers reveals useful design principles and suggests new directions for the development of magnetic organic materials. Novel strategies for solubilizing neutral polymeric materials in polar solvents are investigated.
The incorporation of stable bipyridinium spin-containing units into a polymeric high-spin array is explored. Preliminary results suggest that substituted diquat derivatives may serve as stable spin-containing units for the polaronic ferromagnet and are amenable to electrochemical doping. Synthetic efforts to prepare high-spin polymeric materials using viologens as a spin source have been unsuccessful.
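For context, the usual way such variable-temperature SQUID data distinguish ferromagnetic from antiferromagnetic coupling (a standard analysis, not a result quoted from this work) is a Curie-Weiss fit of the low-field susceptibility,

\chi = \frac{C}{T - \theta},

where a positive Weiss constant θ signals ferromagnetic coupling and a negative θ antiferromagnetic coupling; plots of χT versus T serve the same diagnostic purpose.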
Abstract:
A general framework for multi-criteria optimal design is presented that is well suited to the automated design of structural systems. A systematic computer-aided optimal design decision process is developed that allows the designer to rapidly evaluate and improve a proposed design, taking into account the major factors of interest across design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters, which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure, which is an optimization problem.
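One common way to realize such a preference combination rule (an illustrative assumption; the specific rule used in this work is not reproduced here) is a weighted geometric mean of the individual preference values:

\mu(d) = \prod_{i=1}^{n} \mu_i(d)^{w_i}, \qquad \sum_i w_i = 1, \quad 0 \le \mu_i \le 1,

so that a fully violated criterion (μ_i = 0) vetoes the design, while the weights w_i encode the relative importance of the criteria.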
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the power needed to explore high-dimensional search spaces and seek these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved using the proposed hGA and vGA.
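As a rough sketch of the kind of search involved (a generic real-coded genetic algorithm in Python; this is not the hGA or vGA of the thesis, and the quadratic objective is a stand-in for a structural evaluation measure):

    import random

    def genetic_search(objective, bounds, pop_size=50, generations=200,
                       mutation_rate=0.1, elite=2):
        """Generic real-coded GA: tournament selection, blend crossover,
        clipped Gaussian mutation. `bounds` lists (lo, hi) per parameter;
        lower objective values are better."""
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=objective)
            next_pop = ranked[:elite]                         # elitism
            while len(next_pop) < pop_size:
                # Tournament selection of two parents.
                p1 = min(random.sample(ranked, 3), key=objective)
                p2 = min(random.sample(ranked, 3), key=objective)
                # Blend crossover: each gene lands between its parents' genes.
                child = [a + random.random() * (b - a) for a, b in zip(p1, p2)]
                for i, (lo, hi) in enumerate(bounds):
                    if random.random() < mutation_rate:
                        child[i] = min(hi, max(lo, child[i]
                                               + random.gauss(0, 0.1 * (hi - lo))))
                next_pop.append(child)
            pop = next_pop
        return min(pop, key=objective)

    # Example: minimize a simple two-parameter quadratic; the optimum is (1, -2).
    best = genetic_search(lambda d: (d[0] - 1.0) ** 2 + (d[1] + 2.0) ** 2,
                          bounds=[(-5, 5), (-5, 5)])
    print(best)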
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The approaches developed so far, such as outer-product rules, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; a requirement of full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model, since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look to three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
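As a loose illustration of Winner-Take-All behavior (a minimal discrete-time mutual-inhibition sketch in Python; the thesis works with the continuous Hopfield electronic model, which is not reproduced here):

    import numpy as np

    def winner_take_all(inputs, inhibition=1.0, dt=0.1, steps=500):
        """Leaky mutual-inhibition dynamics: each unit is driven by its own
        input and inhibited by the summed activity of the other units.
        At the stable fixed point, only the largest input's unit stays on."""
        drive_in = np.asarray(inputs, dtype=float)
        x = drive_in.copy()
        for _ in range(steps):
            drive = np.maximum(0.0, drive_in - inhibition * (x.sum() - x))
            x += dt * (drive - x)   # leaky integration toward the net drive
        return x

    print(winner_take_all([0.3, 0.9, 0.5]))  # approx [0, 0.9, 0]: only the winner stays active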
Next, we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now-viable algorithms lead us to new perspectives on switch design.
Abstract:
The power system is on the brink of change. Engineering needs, economic forces, and environmental factors are the main drivers of this change. The vision is to build a smart electrical grid and a smarter market mechanism around it to fulfill mandates on clean energy. Looking at engineering and economic issues in isolation is no longer an option; an integrated design approach is needed. In this thesis, I shall revisit some of the classical questions on the engineering operation of power systems that deal with the nonconvexity of the power flow equations. I shall then explore how these power flow equations interact with electricity markets, in order to address the fundamental issue of market power in a deregulated market environment. Finally, motivated by the emergence of new storage technologies, I present an interesting result on the investment decision problem of placing storage over a power network. The goal of this study is to demonstrate that modern optimization and game theory can provide unique insights into this complex system. Some of the ideas carry over to applications beyond power systems.
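For reference, the nonconvexity referred to here stems from the standard AC power flow equations (a textbook form, not quoted from the thesis):

P_i = \sum_j |V_i||V_j|\left(G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij}\right), \qquad Q_i = \sum_j |V_i||V_j|\left(G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij}\right),

where |V_i| is the voltage magnitude at bus i, θ_ij = θ_i − θ_j is the angle difference, and G_ij + jB_ij are entries of the bus admittance matrix; the bilinear and trigonometric terms make the feasible set nonconvex.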
Abstract:
This report is a product of close industry-academia collaboration between British Aerospace and the Cambridge Engineering Design Centre (EDC). British Aerospace designs and integrates some of the most complex systems in the world, and its expertise in this field has enabled the company to become the United Kingdom's largest exporter. However, to stay at the forefront of the highly competitive aerospace industry, it is necessary to seek new ways to work more effectively and more efficiently. The Cambridge EDC has played a part in supporting these needs by providing access to the methods and tools that it has developed for improving the process of designing mechanical systems. The EDC has gained an international reputation for the quality of its work in this subject. Thus, the collaboration is between two organisations, each of which is a leader in its respective field. The central aim of the project has been to demonstrate how a systematic design process can be applied to a real design task identified by industry. The task selected was the design of a flight refuelling probe which would enable a combat aircraft to refuel from a "flying tanker". However, the systematic approach, methods, and tools described in this report are applicable to most engineering design tasks. The findings presented in this report provide a sound basis for comparing the recommended systematic design process with industrial practice. The results of this comparison would enable the company to define ways in which its existing design process can be improved. This research project has a high degree of industrial relevance. The value of the work may be judged in terms of the opportunities it opens up for positive changes to the company's engineering operations. Several members of the EDC have contributed to the project. These include Dr Lucienne Blessing, Dr Stuart Burgess, Dr Amaresh Chakrabarti, Major Mark Nowack, Aylmer Johnson and Dr Paul Weaver. At British Aerospace, special thanks must go to Alan Dean and David Halliday for their interest and the support they have given. The project has been managed by Dr Nigel Upton of British Aerospace during a 3-year secondment to the EDC.
Abstract:
This work quantifies the nature of delays in genetic regulatory networks and their effect on system dynamics. It is known that a time lag can emerge from a sequence of biochemical reactions. Applying this modeling framework to protein production processes, delay distributions are derived in stochastic (probability density function) and deterministic (impulse function) settings, which are shown to be equivalent under appropriate assumptions. The dependence of the distribution properties on rate constants, gene length, and time-varying temperatures is investigated. Overall, the distribution of the delay in protein production processes is shown to be highly dependent on the size of the genes and mRNA strands as well as on the reaction rates. Results suggest that longer genes have delay distributions with a smaller relative variance, and hence less uncertainty in completion times; however, they lead to larger delays. On the other hand, large uncertainties may actually play a positive role, as broader distributions can lead to larger stability regions when this formalization of the protein production delays is incorporated into a feedback system.
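The link between a chain of reactions and a delay distribution can be made concrete (a standard result consistent with, though not quoted from, this abstract): if elongation consists of n identical first-order steps, each completing at rate k, the total delay is Erlang-distributed,

f(t) = \frac{k^n t^{\,n-1} e^{-kt}}{(n-1)!}, \qquad \mathbb{E}[t] = \frac{n}{k}, \qquad \frac{\mathrm{Var}[t]}{\mathbb{E}[t]^2} = \frac{1}{n},

so longer genes (larger n) have larger mean delays but smaller relative variance, exactly the trend described above.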
Furthermore, evidence suggests that delays may serve as an explicit design element in existing control mechanisms. Accordingly, the recurring dual-feedback motif is also investigated with delays incorporated into the feedback channels. The dual-delayed feedback is shown to have stabilizing effects through a control-theoretic approach. Lastly, a distributed-delay-based controller design method is proposed as a potential design tool. In a preliminary study, the dual-delayed feedback system re-emerges as an effective controller design.
Abstract:
Automatic recording instruments provide the ideal means of recording the responses of rivers, lakes, and reservoirs to short-term changes in the weather. As part of the project ‘Using Automatic Monitoring and Dynamic Modelling for the Active Management of Lakes and Reservoirs', a family of three automatic monitoring stations was designed by engineers at the Centre for Ecology and Hydrology in Windermere to monitor such responses. In this article, the authors describe this instrument network in some detail and present case studies that illustrate the value of high-resolution automatic monitoring in both catchment and reservoir applications.
Abstract:
Many applications in cosmology and astrophysics at millimeter wavelengths, including CMB polarization, studies of galaxy clusters using the Sunyaev-Zeldovich effect (SZE), and studies of star formation at high redshift, in the local universe, and in our galaxy, require large-format arrays of millimeter-wave detectors. Feedhorn and phased-array antenna architectures for receiving mm-wave light present numerous advantages for control of systematics, for simultaneous coverage of both polarizations and/or multiple spectral bands, and for preserving the coherent nature of the incoming light. This enables the application of many traditional "RF" structures such as hybrids, switches, and lumped-element or microstrip band-defining filters.
Simultaneously, kinetic inductance detectors (KIDs) using high-resistivity materials like titanium nitride are an attractive sensor option for large-format arrays because they are highly multiplexable and because they can have sensitivities reaching the condition of background-limited detection. A KID is an LC resonator whose inductance comprises the geometric inductance and the kinetic inductance of the inductor in the superconducting phase. A photon absorbed by the superconductor breaks a Cooper pair into normal-state electrons and perturbs the kinetic inductance, rendering the device a detector of light. The responsivity of a KID is given by the fractional frequency shift of the LC resonator per unit optical power.
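In symbols (standard KID relations, stated here for orientation rather than quoted from the thesis), the resonance frequency is

f_r = \frac{1}{2\pi\sqrt{(L_g + L_k)\,C}},

and a pair-breaking-induced change δL_k shifts it by

\frac{\delta f_r}{f_r} = -\frac{\alpha}{2}\frac{\delta L_k}{L_k}, \qquad \alpha \equiv \frac{L_k}{L_g + L_k},

so high-kinetic-inductance-fraction materials such as titanium nitride (α → 1) maximize the responsivity.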
However, coupling these types of optical reception elements to KIDs is a challenge because of the impedance mismatch between the microstrip transmission line exiting these architectures and the high resistivity of titanium nitride. Mitigating direct absorption of light through free-space coupling to the inductor of the KID is another challenge. We present a detailed titanium nitride KID design that addresses these challenges. The KID inductor is capacitively coupled to the microstrip in such a way as to form a lossy termination without creating an impedance mismatch. A parallel-plate capacitor design using hydrogenated amorphous silicon mitigates direct absorption and yields acceptable noise. We show that the optimized design can yield expected sensitivities very close to the fundamental limit for a long-wavelength imager (LWCam) covering six spectral bands from 90 to 400 GHz for SZE studies.
Excess phase (frequency) noise has been observed in KIDs and is very likely caused by two-level systems (TLS) in dielectric materials. The TLS hypothesis is supported by the measured dependence of the noise on resonator internal power and temperature. However, a unified microscopic theory that can quantitatively model the properties of the TLS noise is still lacking. In this thesis we derive the noise power spectral density due to the coupling of TLS with the phonon bath based on an existing model, and we compare the theoretical predictions of the power and temperature dependences with experimental data. We discuss the limitations of this model and propose directions for future study.
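For orientation, the empirical scalings that motivate the TLS hypothesis (commonly reported in the KID literature, not results from this thesis) are a fractional-frequency noise that falls with microwave readout power roughly as

S_{\delta f_r/f_r} \propto P_{\mathrm{int}}^{-1/2}

and that decreases as the bath temperature rises; any microscopic TLS-phonon model must reproduce these trends.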