971 results for non-ideal problems


Relevance: 80.00%

Abstract:

A study was made of the effect of blending practice upon selected physical properties of crude oils, and of various base oils and petroleum products, using a range of binary mixtures. The crudes comprised light, medium and heavy Kuwait crude oils. The properties included kinematic viscosity, pour point, boiling point and Reid vapour pressure. The literature related to the prediction of these properties, and the changes reported to occur on blending, was critically reviewed as a preliminary to the study. The kinematic viscosity of petroleum oils in general exhibited non-ideal behaviour upon blending. A mechanism was proposed for this behaviour which took into account the effect of asphaltene content. A correlation was developed, as a modification of Grunberg's equation, to predict the viscosities of binary mixtures of petroleum oils. A correlation was also developed to predict the viscosities of ternary mixtures. This correlation showed better agreement with experimental data (< 6% deviation for crude oils and 2.0% for base oils) than currently used methods, i.e. the ASTM and Refutas methods. An investigation was made of the effect of temperature on the viscosities of crude oils and petroleum products at atmospheric pressure. The effect of pressure on the viscosity of crude oil was also studied. A correlation was developed to predict the viscosity at high pressures (up to 8000 psi), which gave significantly better agreement with the experimental data than the current method due to Kouzel (5.2% and 6.0% deviation for the binary and ternary mixtures respectively). Eyring's theory of viscous flow was critically investigated, and a modification was proposed which extends its application to petroleum oils. The effect of blending on the pour points of selected petroleum oils was studied, together with the effect of wax formation and asphaltene content. Depression of the pour point was always obtained with crude oil binary mixtures. A mechanism was proposed to explain the pour point behaviour of the different binary mixtures. The effects of blending on the boiling point ranges and Reid vapour pressures of binary mixtures of petroleum oils were investigated. The boiling point range exhibited ideal behaviour but the R.V.P. showed negative deviations from ideality in all cases. Molecular weights of these mixtures were ideal, but the densities and molar volumes were not. The stability of the various crude oil binary mixtures, in terms of viscosity, was studied over a temperature range of 1°C - 30°C for up to 12 weeks. Good stability was found in most cases.
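For orientation, the Refutas method mentioned above blends kinematic viscosities through a logarithmic Viscosity Blending Number. The sketch below shows only that baseline calculation, not the modified Grunberg-type correlation developed in the thesis; the oils, mass fractions and function names are illustrative assumptions.

```python
import math

def refutas_vbn(nu_cst):
    """Viscosity Blending Number from kinematic viscosity in cSt (Refutas)."""
    return 14.534 * math.log(math.log(nu_cst + 0.8)) + 10.975

def refutas_blend_viscosity(viscosities_cst, mass_fractions):
    """Kinematic viscosity of a blend by the standard Refutas index method,
    i.e. the baseline the thesis compares against, not its own correlation."""
    vbn_blend = sum(w * refutas_vbn(nu)
                    for nu, w in zip(viscosities_cst, mass_fractions))
    return math.exp(math.exp((vbn_blend - 10.975) / 14.534)) - 0.8

# Illustrative 60/40 (by mass) blend of a light and a heavy oil, viscosities in cSt:
print(refutas_blend_viscosity([5.0, 350.0], [0.6, 0.4]))   # roughly 16 cSt
```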

Relevance: 80.00%

Abstract:

The theory of vapour-liquid equilibria is reviewed, as is the present status of prediction methods in this field. After discussion of the experimental methods available, development of a recirculating equilibrium still based on a previously successful design (the modified Raal, Code and Best still of O'Donnell and Jenkins) is described. This novel still is designed to work at pressures up to 35 bar and for the measurement of both isothermal and isobaric vapour-liquid equilibrium data. The equilibrium still was first commissioned by measuring the saturated vapour pressures of pure ethanol and cyclohexane in the temperature ranges 77-124°C and 80-142°C respectively. The data obtained were compared with available literature experimental values and with values derived from an extended form of the Antoine equation for which parameters were given in the literature. Commissioning continued with the study of the phase behaviour of mixtures of the two pure components, as such mixtures are strongly non-ideal, showing azeotropic behaviour. No data existed in the literature above atmospheric pressure. Isothermal measurements were made at 83.29°C and 106.54°C, whilst isobaric measurements were made at pressures of 1 bar, 3 bar and 5 bar respectively. The experimental vapour-liquid equilibrium data obtained are assessed by a standard literature method incorporating a thermodynamic consistency test that minimises the errors in all the measured variables. This assessment showed that reasonable x-P-T data-sets had been measured, from which y-values could be deduced, but that the experimental y-values indicated the need for improvements in the design of the still. The final discussion sets out the improvements required and outlines how they might be attained.
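For reference, the basic three-parameter Antoine form behind the commissioning comparison is sketched below. The thesis used an extended Antoine equation with parameters taken from the literature; the coefficients here are commonly quoted illustrative values for ethanol (P in mmHg, T in °C) and are an assumption, not the set used in the work.

```python
import math

def antoine_pressure_mmHg(T_celsius, A, B, C):
    """Saturated vapour pressure from the basic Antoine equation:
    log10(P) = A - B / (C + T). Units follow the coefficient set used."""
    return 10.0 ** (A - B / (C + T_celsius))

# Illustrative coefficients for ethanol, roughly valid between 0 and 100 °C
# (the thesis used an *extended* Antoine form with literature parameters):
A, B, C = 8.20417, 1642.89, 230.300
print(antoine_pressure_mmHg(78.3, A, B, C))   # ~760 mmHg near the normal boiling point
```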

Relevance: 80.00%

Abstract:

Linear Programming (LP) is a powerful decision making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least-squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
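As a sketch of the linear-algebra kernel referred to above, the following shows a preconditioned conjugate gradient solve of a normal-equations system of the kind that arises at each interior point iteration, using a simple Jacobi (diagonal) preconditioner. The matrices are random stand-ins rather than a real LP instance, and the preconditioners studied in the thesis are more elaborate than the diagonal one shown.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for A x = b, with A symmetric
    positive definite and a diagonal (Jacobi) preconditioner M = diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Normal-equations system A D^2 A^T y = rhs, as arises in interior-point steps
# (A, D and rhs are random stand-ins, not a real LP instance):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
D2 = np.diag(rng.uniform(0.1, 10.0, 50))
N = A @ D2 @ A.T
rhs = rng.standard_normal(20)
y = pcg(N, rhs, 1.0 / np.diag(N))
print(np.linalg.norm(N @ y - rhs))   # residual should be near machine precision
```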

Relevance: 80.00%

Abstract:

We have optically simulated the performance of various apertures used in Coded Aperture Imaging. Coded pictures of extended and continuous-tone planar objects formed with the Annulus, Twin Annulus, Fresnel Zone Plate and the Uniformly Redundant Array have been decoded using a noncoherent correlation process. We have compared the tomographic capabilities of the Twin Annulus with the Uniformly Redundant Arrays based on quadratic residues and m-sequences. We discuss ways of reducing the 'd.c.' background of the various apertures used. The non-ideal system point-spread function inherent in a noncoherent optical correlation process produces artifacts in the reconstruction. Artifacts are also introduced as a result of unwanted cross-correlation terms from out-of-focus planes. We find that the URA based on m-sequences exhibits good spatial resolution and out-of-focus behaviour when imaging extended objects.
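The correlation-decoding principle behind such apertures can be illustrated in one dimension with an m-sequence mask and its balanced decoding array, whose periodic cross-correlation is a single peak with zero sidelobes. This is only a numerical illustration of the ideal point-spread function, not of the optical noncoherent correlator used in the work; the sequence length and LFSR taps are arbitrary choices.

```python
import numpy as np

def m_sequence(taps=(3, 1), n=3):
    """Maximal-length sequence (period 2^n - 1) from a simple LFSR.
    taps are 1-indexed feedback positions; (3, 1) corresponds to x^3 + x + 1."""
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

a = m_sequence()                 # aperture: 1 = open, 0 = opaque
g = 2 * a - 1                    # balanced decoding array used in correlation decoding
# Periodic cross-correlation of the aperture with the decoding array:
psf = np.array([np.sum(a * np.roll(g, k)) for k in range(len(a))])
print(a)    # e.g. [1 1 1 0 1 0 0]
print(psf)  # a single peak with zero sidelobes: the ideal periodic system PSF
```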

Relevance: 80.00%

Abstract:

Horizontal Subsurface Flow Treatment Wetlands (HSSF TWs) are used by Severn Trent Water as a low-cost tertiary wastewater treatment for rural locations. Experience has shown that clogging is a major operational problem that reduces HSSF TW lifetime. Clogging is caused by an accumulation of secondary wastewater solids from upstream processes and decomposing leaf litter. Clogging occurs as a sludge layer where wastewater is loaded on the surface of the bed at the inlet. Severn Trent systems receive relatively high hydraulic loading rates, which causes overland flow and reduces the ability to mineralise surface sludge accumulations. A novel apparatus and method, the Aston Permeameter, was created to measure hydraulic conductivity in situ. Accuracy is ±30 %, which was considered adequate given that conductivity in clogged systems varies by several orders of magnitude. The Aston Permeameter was used to perform 20 separate tests on 13 different HSSF TWs in the UK and the US. The minimum conductivity measured was 0.03 m/d at Fenny Compton (compared with 5,000 m/d clean conductivity), which was caused by an accumulation of construction fines in one part of the bed. Most systems displayed a 2 to 3 order of magnitude variation in conductivity in each dimension. Statistically significant transverse variations in conductivity were found in 70% of the systems. Clogging at the inlet and outlet was generally highest where flow enters the influent distribution and exits the effluent collection system, respectively. Surface conductivity was lower in systems with dense vegetation because plant canopies reduce surface evapotranspiration and decelerate sludge mineralisation. An equation was derived to describe how the water table profile is influenced by overland flow, spatial variations in conductivity and clogging. The equation is calibrated using a single parameter, the Clog Factor (CF), which represents the equivalent loss of porosity that would reproduce measured conductivity according to the Kozeny-Carman Equation. The CF varies from 0 for ideal conditions to 1 for completely clogged conditions. Minimum CF was 0.54 for a system that had recently been refurbished, which represents the deviation from ideal conditions due to characteristics of non-ideal media such as particle size distribution and morphology. Maximum CF was 0.90 for a 15 year old system that exhibited sludge accumulation and overland flow across the majority of the bed. A Finite Element Model of a 15 m long HSSF TW was used to indicate how hydraulics and hydrodynamics vary as CF increases. It was found that as CF increases from 0.55 to 0.65 the subsurface wetted area increases, which causes mean hydraulic residence time to increase from 0.16 days to 0.18 days. As CF increases from 0.65 to 0.90, the extent of overland flow increases from 1.8 m to 13.1 m, which reduces hydraulic efficiency from 37 % to 12 % and reduces mean residence time to 0.08 days.
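A minimal sketch of the Clog Factor idea described above is to back-calculate the equivalent porosity that reproduces a measured conductivity through the Kozeny-Carman equation, and express it as a fractional loss of the clean-bed porosity. The Kozeny-Carman form, the water properties, and the example numbers below are assumptions made for illustration; the exact definition and calibration used in the thesis may differ.

```python
import numpy as np
from scipy.optimize import brentq

RHO_G_OVER_MU = 1000.0 * 9.81 / 1.0e-3   # rho*g/mu for water near 20 °C [1/(m*s)]

def kozeny_carman_K(porosity, d_m):
    """Hydraulic conductivity [m/s] of a granular bed (one common KC form)."""
    return (RHO_G_OVER_MU * d_m**2 / 180.0) * porosity**3 / (1.0 - porosity)**2

def clog_factor(K_measured_m_per_day, porosity_clean, d_m):
    """Back-calculate the equivalent porosity that reproduces the measured
    conductivity, and express the loss as a fraction of the clean porosity.
    This mirrors the Clog Factor concept in the abstract; the thesis's exact
    formulation and constants may differ."""
    K_measured = K_measured_m_per_day / 86400.0            # m/d -> m/s
    f = lambda phi: kozeny_carman_K(phi, d_m) - K_measured
    phi_eff = brentq(f, 1e-6, porosity_clean)
    return 1.0 - phi_eff / porosity_clean

# Illustrative numbers: 10 mm gravel, clean porosity 0.40, measured K of 50 m/d
print(clog_factor(50.0, 0.40, 0.01))   # roughly 0.9, i.e. a heavily clogged bed
```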

Relevance: 80.00%

Abstract:

A multistage distillation column in which mass transfer and a reversible chemical reaction occurred simultaneously has been investigated to formulate a technique by which this process can be analysed or predicted. A transesterification reaction between ethyl alcohol and butyl acetate, catalysed by concentrated sulphuric acid, was selected for the investigation and all the components were analysed on a gas-liquid chromatograph. The transesterification reaction kinetics were studied in a batch reactor for catalyst concentrations of 0.1 - 1.0 weight percent and temperatures between 21.4 and 85.0 °C. The reaction was found to be second order and dependent on the catalyst concentration at a given temperature. The vapour-liquid equilibrium data for six binary, four ternary and one quaternary system were measured at atmospheric pressure using a modified Cathala dynamic equilibrium still. The systems, with the exception of ethyl alcohol - butyl alcohol mixtures, were found to be non-ideal. Multicomponent vapour-liquid equilibrium compositions were predicted by a computer programme which utilised the Van Laar constants obtained from the binary data sets. Good agreement was obtained between the predicted and experimental quaternary equilibrium vapour compositions. Continuous transesterification experiments were carried out in a six-stage sieve plate distillation column. The column was 3" in internal diameter and of unit construction in glass. The plates were 8" apart and had a free area of 7.7%. Both the liquid and vapour streams were analysed. The component conversion was dependent on the boil-up rate and the reflux ratio. Because of the presence of the reaction, the concentration of one of the lighter components increased below the feed plate. In the same region a highly developed foam was formed due to the presence of the catalyst. The experimental results were analysed by the solution of a series of simultaneous enthalpy and mass equations. Good agreement was obtained between the experimental and calculated results.
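The binary Van Laar model used for the equilibrium predictions has the closed form sketched below; the constants and pressures shown are purely illustrative assumptions, not the values fitted from the measured binary data.

```python
import math

def van_laar_gammas(x1, A12, A21):
    """Binary Van Laar activity coefficients (natural-log form):
    ln g1 = A12 * (A21*x2 / (A12*x1 + A21*x2))**2, and symmetrically for g2."""
    x2 = 1.0 - x1
    denom = A12 * x1 + A21 * x2
    ln_g1 = A12 * (A21 * x2 / denom) ** 2
    ln_g2 = A21 * (A12 * x1 / denom) ** 2
    return math.exp(ln_g1), math.exp(ln_g2)

def modified_raoult_y1(x1, P_total, P1_sat, A12, A21):
    """Vapour-phase mole fraction of component 1 from modified Raoult's law,
    y1 = x1 * gamma1 * P1_sat / P_total."""
    g1, _ = van_laar_gammas(x1, A12, A21)
    return x1 * g1 * P1_sat / P_total

# Illustrative constants and pressures only (not the values fitted in the thesis):
print(van_laar_gammas(0.3, A12=1.2, A21=0.9))
print(modified_raoult_y1(0.3, P_total=760.0, P1_sat=900.0, A12=1.2, A21=0.9))
```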

Relevance: 80.00%

Abstract:

This dissertation discussed resource allocation mechanisms in several network topologies, including infrastructure wireless networks, non-infrastructure wireless networks and wire-cum-wireless networks. Different networks may have different resource constraints. Based on actual technologies and implementation models, utility functions, game theory and a modern control algorithm have been introduced to balance power, bandwidth and customers' satisfaction in the system. In infrastructure wireless networks, a utility function was used in the Third Generation (3G) cellular network, where the network tries to maximize the total utility. In this dissertation, revenue maximization was set as the objective. Compared with previous work on utility maximization, revenue maximization is more practical for cellular network operators to implement. Pricing strategies were studied and algorithms were given to find the optimal price combination of power and rate that maximizes profit without degrading Quality of Service (QoS) performance. In non-infrastructure wireless networks, power capacity is limited by the small size of the nodes. In such a network, nodes need to transmit traffic not only for themselves but also for their neighbors, so power management becomes the most important issue for overall network performance. Our innovative routing algorithm, based on a utility function, sets up a flexible framework for different users with different concerns in the same network. This algorithm allows users to make trade-offs between multiple resource parameters. Its flexibility makes it a suitable solution for large-scale non-infrastructure networks. This dissertation also covers non-cooperation problems. By combining game theory and utility functions, equilibrium points can be found among rational users, which enhances cooperation in the network. Finally, a wire-cum-wireless network architecture was introduced. This network architecture can support multiple services over multiple networks with smart resource allocation methods. Although a SONET-to-WiMAX case was used for the analysis, the mathematical procedure and resource allocation scheme could serve as universal solutions for all infrastructure, non-infrastructure and combined networks.

Relevance: 80.00%

Abstract:

Wireless sensor networks are emerging as effective tools for the gathering and dissemination of data. They can be applied in many fields including health, environmental monitoring, home automation and the military. As with all other computing systems, it is necessary to include security features so that security-sensitive data traversing the network is protected. However, traditional security techniques cannot be applied to wireless sensor networks, due to the constraints on battery power, memory, and the computational capacities of the miniature wireless sensor nodes. To address this need, it becomes necessary to develop new lightweight security protocols. This dissertation focuses on designing a suite of lightweight trust-based security mechanisms and a cooperation enforcement protocol for wireless sensor networks. The dissertation presents a trust-based cluster head election mechanism used to elect new cluster heads. This solution prevents a major security breach against the routing protocol, namely the election of malicious or compromised cluster heads. The dissertation also describes a location-aware, trust-based mechanism for detecting and isolating compromised nodes. Both of these mechanisms rely on the ability of a node to monitor its neighbors. Using neighbor monitoring techniques, the nodes are able to determine their neighbors' reputation and trust level through probabilistic modeling. The mechanisms were designed to mitigate internal attacks within wireless sensor networks. The feasibility of the approach is demonstrated through extensive simulations. The dissertation also addresses non-cooperation problems in multi-user wireless sensor networks, for which a scalable lightweight enforcement algorithm using evolutionary game theory is designed. The effectiveness of this cooperation enforcement algorithm is validated through mathematical analysis and simulation. This research has advanced the knowledge of wireless sensor network security and cooperation by developing new techniques based on mathematical models. In doing so, we have enabled others to build on our work towards the creation of highly trusted wireless sensor networks, facilitating their full utilization in many fields ranging from civilian to military applications.
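One common way to turn neighbor-monitoring observations into a probabilistic reputation and trust value is a Beta-distribution update, sketched below. This is an illustration of the general idea, not necessarily the exact model used in the dissertation; the trust threshold in the usage example is an arbitrary assumption.

```python
from dataclasses import dataclass

@dataclass
class BetaReputation:
    """Beta-distribution reputation: alpha counts cooperative observations,
    beta counts misbehaviour observed while monitoring a neighbour."""
    alpha: float = 1.0
    beta: float = 1.0

    def observe(self, cooperative: bool) -> None:
        if cooperative:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        """Expected probability of cooperative behaviour."""
        return self.alpha / (self.alpha + self.beta)

# A node watches a neighbour forward 8 packets and drop 2:
rep = BetaReputation()
for ok in [True] * 8 + [False] * 2:
    rep.observe(ok)
print(rep.trust)          # ~0.75
print(rep.trust > 0.5)    # e.g. still eligible for cluster-head election
```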


Relevance: 80.00%

Abstract:

In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high performance data conversion for a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise-shaping for any centre frequency from DC to Nyquist. The proposed topologies are well-suited for Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency and bandwidth as well as the passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop-filters, not only overcomes the limitations associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means to increase the conversion bandwidth, thereby relaxing the need to use faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for the design of time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators were designed, evaluated and compared in terms of their signal-to-noise ratios, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators are designed using this technique. The mathematical modelling and evaluation of tones caused by the finite wordlengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies are implemented in VHDL and synthesized on a Xilinx® Spartan™-3 Development Kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated using MATLAB routines developed by the author; these routines included the decimation process as well. The experiments undertaken by the author further validated the design methodology presented in the work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate either in single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool, developed in MATLAB/Simulink, that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, path number and oversampling ratio.
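For context, a behavioural-level model of a conventional second-order lowpass sigma-delta modulator (error-feedback form, 1-bit quantiser) is sketched below. It shows the baseline noise-shaping behaviour that the generalised variable bandpass, time-interleaved topologies of the thesis re-centre at an arbitrary frequency; it is not one of the proposed topologies, and the signal parameters are arbitrary.

```python
import numpy as np

def mod2(x):
    """Behavioural model of a conventional second-order lowpass sigma-delta
    modulator in error-feedback form with a 1-bit quantiser; the output
    satisfies V(z) = X(z) + (1 - z^-1)^2 E(z) for the quantisation error E."""
    v = np.zeros_like(x)
    e1 = e2 = 0.0                        # quantisation error delayed by 1 and 2 samples
    for n, xn in enumerate(x):
        u = xn - 2.0 * e1 + e2           # loop filter acting on past quantisation errors
        y = 1.0 if u >= 0.0 else -1.0    # 1-bit quantiser
        e2, e1 = e1, y - u               # shift the error history
        v[n] = y
    return v

# Oversampled low-frequency tone: the quantisation noise in v is shaped away
# from DC towards high frequencies; the generalised bandpass loop-filters of
# the thesis place this noise notch at an arbitrary centre frequency instead.
N = 8192
x = 0.5 * np.sin(2 * np.pi * 8 * np.arange(N) / N)
v = mod2(x)
spectrum = np.abs(np.fft.rfft(v * np.hanning(N)))
print(spectrum[:16].round(1))            # the signal bin dominates the low-frequency bins
```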

Relevance: 80.00%

Abstract:

We present an analytical solution of a mixed boundary value problem for an unbounded 2D doubly periodic domain which is a model of a composite material with mixed imperfect interface conditions. We find the effective conductivity of the composite material with mixed imperfect interface conditions, and also give numerical analysis of several of their properties such as temperature and flux.

Relevance: 80.00%

Abstract:

A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented for use in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech in the presence of non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results on a database built from political speeches. The tests were performed by adding artificial noise to the naturally noisy audio signals, and several algorithms are compared. The results will be extrapolated to the field of adaptive filtering of monophonic signals and to the analysis of speech pathologies in future work.
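A minimal sketch of this kind of detector is shown below: the Hilbert envelope is averaged per frame and compared against a threshold updated as a convex combination of its previous value and the current frame energy. The frame length, margin factor and the choice to adapt the threshold only during non-speech frames are assumptions made for illustration, not the exact rule proposed in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def vad_hilbert(x, fs, frame_ms=20, alpha=0.95, margin=2.0):
    """Frame-level voice activity decisions from the Hilbert envelope.

    The threshold is updated as a convex combination of its previous value
    and the current frame energy, and (a simplifying assumption here) only
    adapts during frames judged to be non-speech."""
    env = np.abs(hilbert(x))                       # analytic-signal envelope
    frame = int(fs * frame_ms / 1000)
    n_frames = len(env) // frame
    energies = env[:n_frames * frame].reshape(n_frames, frame).mean(axis=1)
    thr = energies[0]
    decisions = []
    for e in energies:
        speech = e > margin * thr                  # margin over the noise floor
        if not speech:
            thr = alpha * thr + (1.0 - alpha) * e  # convex-combination update
        decisions.append(speech)
    return np.array(decisions)

# Toy signal: 0.5 s of low-level noise, 0.5 s of a noisy tone ("speech"), 0.5 s of noise.
fs = 8000
rng = np.random.default_rng(1)
noise = 0.01 * rng.standard_normal(fs // 2)
tone = np.sin(2 * np.pi * 220 * np.arange(fs // 2) / fs) + 0.01 * rng.standard_normal(fs // 2)
x = np.concatenate([noise, tone, noise])
print(vad_hilbert(x, fs).astype(int))              # ~0s, then ~1s, then ~0s
```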

Relevance: 80.00%

Abstract:

The Solar Intensity X-ray and particle Spectrometer (SIXS) on board BepiColombo's Mercury Planetary Orbiter (MPO) will study solar energetic particles moving towards Mercury and solar X-rays on the dayside of Mercury. The SIXS instrument consists of two detector sub-systems: the X-ray detector SIXS-X and the particle detector SIXS-P. The SIXS-P sub-detector will detect solar energetic electrons and protons over a broad energy range using a particle-telescope approach, with five outer Si detectors around a central CsI(Tl) scintillator. The measurements made by the SIXS instrument are also needed by other instruments on board the spacecraft. SIXS data will be used to study the solar X-ray corona, solar flares, solar energetic particles, the Hermean magnetosphere, and solar eruptions. The SIXS-P detector was calibrated by comparing experimental measurement data from the instrument with Geant4 simulation data. Calibration curves were produced for the different side detectors and the core scintillator, for electrons and protons respectively. The side-detector energy response was found to be linear for both electrons and protons. The core-scintillator energy response to protons was found to be non-linear, and the core-scintillator calibration for electrons was omitted due to insufficient experimental data. The electron and proton acceptance of the SIXS-P detector was determined with Geant4 simulations. The electron and proton energy channels are clean in the main energy range of the instrument; at higher energies, protons and electrons produce a non-ideal response in the energy channels. Due to the limited bandwidth of the spacecraft's telemetry, the particle measurements made by SIXS-P have to be pre-processed in the data processing unit of the SIXS instrument. A lookup table was created for this pre-processing using Geant4 simulations, and the ability of the lookup table to provide spectral information from a simulated electron event was analysed. The lookup table produces clean electron and proton channels and is able to separate protons from electrons. Based on a simulated solar energetic electron event, the incident electron spectrum cannot be determined from the channel particle counts with a standard analysis method.
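The side-detector calibration described above amounts to fitting a linear relation between Geant4-simulated deposited energy and the measured pulse-height channel. A minimal sketch with made-up calibration points is shown below; the numbers are illustrative only, and the core scintillator's non-linear proton response would need a different functional form.

```python
import numpy as np

# Illustrative (made-up) calibration points for one Si side detector:
# Geant4-simulated deposited energy [keV] vs. measured pulse-height channel.
deposited_keV = np.array([100.0, 300.0, 500.0, 800.0, 1200.0])
channel = np.array([52.0, 148.0, 251.0, 398.0, 602.0])

# Linear response (as reported for the side detectors): channel = gain * E + offset
gain, offset = np.polyfit(deposited_keV, channel, deg=1)

def channel_to_energy(ch):
    """Invert the linear calibration to recover deposited energy in keV."""
    return (ch - offset) / gain

print(gain, offset)
print(channel_to_energy(300.0))   # ~600 keV for these illustrative numbers
```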

Relevance: 80.00%

Abstract:

Conventional Si complementary-metal-oxide-semiconductor (CMOS) scaling is fast approaching its limits. The extension of the logic device roadmap for future enhancements in transistor performance requires non-Si materials and new device architectures. III-V materials, due to their superior electron transport properties, are well poised to replace Si as the channel material beyond the 10 nm technology node to mitigate the performance loss of Si transistors from further reductions in supply voltage to minimise power dissipation in logic circuits. However, several key challenges, including a high quality dielectric/III-V gate stack, a low-resistance source/drain (S/D) technology, heterointegration onto a Si platform and a viable III-V p-metal-oxide-semiconductor field-effect transistor (MOSFET), need to be addressed before III-Vs can be employed in CMOS. This Thesis specifically addressed the development and demonstration of planar III-V p-MOSFETs, to complement the n-MOSFET, thereby enabling an all III-V CMOS technology to be realised. This work explored the application of InGaAs and InGaSb material systems as the channel, in conjunction with Al2O3/metal gate stacks, for p-MOSFET development based on the buried-channel flatband device architecture. The body of work undertaken comprised material development, process module development and integration into a robust fabrication flow for the demonstration of p-channel devices. The parameter space in the design of the device layer structure, based around the III-V channel/barrier material options of InxGa1-xAs (x ≥ 0.53)/In0.52Al0.48As and InxGa1-xSb (x ≥ 0.1)/AlSb, was systematically examined to improve hole channel transport. A mobility of 433 cm^2/Vs, the highest room temperature hole mobility of any InGaAs quantum-well channel reported to date, was obtained for the In0.85Ga0.15As (2.1% strain) structure. S/D ohmic contacts were developed based on thermally annealed Au/Zn/Au metallisation and validated using transmission line model test structures. The effects of metallisation thickness, diffusion barriers and de-oxidation conditions were examined. Contacts to InGaSb-channel structures were found to be sensitive to de-oxidation conditions. A fabrication process, based on a lithographically-aligned double ohmic patterning approach, was realised for deep submicron gate-to-source/drain gap (Lside) scaling to minimise the access resistance, thereby mitigating the effects of parasitic S/D series resistance on transistor performance. The developed process yielded gaps as small as 20 nm. For high-k integration on GaSb, ex-situ ammonium sulphide ((NH4)2S) treatments, in the range 1%-22%, for 10 min at 295 K were systematically explored for improving the electrical properties of the Al2O3/GaSb interface. Electrical and physical characterisation indicated the 1% treatment to be most effective, with interface trap densities in the range of 4-10×10^12 cm^-2 eV^-1 in the lower half of the bandgap. An extended study, comprising additional immersion times at each sulphide concentration, was further undertaken to determine the surface roughness and the etching nature of the treatments on GaSb. A number of p-MOSFETs based on the III-V channels with the most promising hole transport, and integrating the developed process modules, were successfully demonstrated in this work.
Although the non-inverted InGaAs-channel devices showed good current modulation and switch-off characteristics, several aspects of performance were non-ideal: depletion-mode operation, modest drive current (Id,sat = 1.14 mA/mm), double-peaked transconductance (gm = 1.06 mS/mm), high subthreshold swing (SS = 301 mV/dec) and high on-resistance (Ron = 845 kΩ·μm). Despite demonstrating substantial improvement in the on-state metrics of Id,sat (11×), gm (5.5×) and Ron (5.6×), inverted devices did not switch off. Scaling the gate-to-source/drain gap (Lside) from 1 μm down to 70 nm improved Id,sat (72.4 mA/mm) by a factor of 3.6 and gm (25.8 mS/mm) by a factor of 4.1 in inverted InGaAs-channel devices. Well-controlled current modulation and good saturation behaviour were observed for InGaSb-channel devices. In the on-state, In0.3Ga0.7Sb-channel (Id,sat = 49.4 mA/mm, gm = 12.3 mS/mm, Ron = 31.7 kΩ·μm) and In0.4Ga0.6Sb-channel (Id,sat = 38 mA/mm, gm = 11.9 mS/mm, Ron = 73.5 kΩ·μm) devices outperformed the InGaAs-channel devices. However, the devices could not be switched off. These findings indicate that III-V p-MOSFETs based on InGaSb as opposed to InGaAs channels are more suited as the p-channel option for post-Si CMOS.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists for any quantum gate that is to be used for such a computation to be able to continue for an unlimited number of steps. Specifically, the error probability Pe for such a gate must fall below the accuracy threshold: Pe < Pa. Estimates of Pa vary widely, though Pa ~ 10^-4 has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, both for ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall by 1 to 4 orders of magnitude below the target threshold of 10^-4. After applying the neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows. Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = (|01⟩ + |10⟩)/√2. I show that for ideal (non-ideal) control, an approximate |β01⟩ state could be prepared with error probability ε ~ 10^-6 (10^-5) with one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
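For concreteness, the target state and the quoted error probability ε = 1 − F can be written down directly. The sketch below builds |β01⟩ and evaluates 1 − F for an imperfectly prepared state, using an arbitrary small rotation error as a stand-in for control error; it does not model the non-adiabatic rapid passage gates or the fault-tolerant encoding described in the thesis.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Target resource state |beta_01> = (|01> + |10>) / sqrt(2)
beta01 = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)

# Imperfectly prepared state: a small rotation error on the first qubit,
# an arbitrary stand-in for control error (not the NARP gate dynamics).
theta = 0.01
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
prepared = np.kron(Ry, np.eye(2)) @ beta01

fidelity = abs(np.vdot(beta01, prepared)) ** 2
print(1.0 - fidelity)   # error probability eps = 1 - F, here about 2.5e-5
```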