973 results for Optimization techniques
Abstract:
It has become clear that many organisms possess the ability to regulate their mutation rate in response to environmental conditions. So the question of finding an optimal mutation rate must be replaced by that of finding an optimal mutation schedule. We show that this task cannot be accomplished with standard population-dynamic models. We then develop a "hybrid" model for populations experiencing time-dependent mutation that treats population growth as deterministic but the time of first appearance of new variants as stochastic. We show that the hybrid model agrees well with a Monte Carlo simulation. From this model, we derive a deterministic approximation, a "threshold" model, that is similar to standard population dynamic models but differs in the initial rate of generation of new mutants. We use these techniques to model antibody affinity maturation by somatic hypermutation. We had previously shown that the optimal mutation schedule for the deterministic threshold model is phasic, with periods of mutation between intervals of mutation-free growth. To establish the validity of this schedule, we now show that the phasic schedule that optimizes the deterministic threshold model significantly improves upon the best constant-rate schedule for the hybrid and Monte Carlo models.
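As a rough illustration of the hybrid idea described in this abstract, deterministic growth combined with a stochastic first-appearance time, the sketch below samples the time at which the first new variant arises under a time-dependent mutation rate. The exponential-growth form, the phasic and constant schedules, and all parameter values are assumptions made for illustration, not taken from the paper.

```python
import math
import random

def first_mutant_time(mu, n0=1.0, r=1.0, dt=1e-3, t_max=50.0):
    """Sample the time of first appearance of a new variant.

    Deterministic growth: N(t) = n0 * exp(r * t).
    Stochastic appearance: inhomogeneous Poisson process with
    intensity mu(t) * N(t), approximated on a fine time grid.
    """
    t = 0.0
    while t < t_max:
        n_t = n0 * math.exp(r * t)
        p = mu(t) * n_t * dt          # probability of a first mutant in [t, t+dt)
        if random.random() < min(p, 1.0):
            return t
        t += dt
    return None  # no mutant appeared before t_max

def phasic(t):
    # mutation switched on for the first 2 time units of every 10 (illustrative schedule)
    return 1e-3 if int(t) % 10 < 2 else 0.0

def constant(t):
    return 2e-4

print(first_mutant_time(phasic), first_mutant_time(constant))
```

Repeating the sampling many times for each schedule mirrors, in spirit, the paper's comparison between phasic and constant-rate mutation.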
Abstract:
Microalgae have many applications, such as biodiesel production or food supplements. Depending on the application, optimization of certain fractions of the biochemical composition (proteins, carbohydrates, and lipids) is required, so samples obtained under different culture conditions must be analyzed in order to compare the content of these fractions. However, traditional methods require lengthy analytical procedures and long sample turnaround times. Results for the biochemical composition of Nannochloropsis oculata samples with different protein, carbohydrate, and lipid contents obtained by conventional analytical methods have been compared to those obtained by thermogravimetry (TGA) and by a Pyroprobe device connected to a gas chromatograph with a mass spectrometer detector (Py–GC/MS), showing a clear correlation. These results suggest the potential applicability of these techniques as fast and easy methods to qualitatively compare the biochemical composition of microalgal samples.
Abstract:
The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm that can simultaneously fulfill many design goals thanks to the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
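As a rough sketch of the trade-off search that such a tool automates (not the authors' actual implementation), the snippet below applies the non-dominated filtering on which NSGA-II ranking is built to a handful of invented hardened-software candidates scored on fault coverage, code-size overhead, and runtime overhead.

```python
from typing import List, Tuple

# Each candidate hardened version: (fault_coverage, code_size_overhead, runtime_overhead).
# Coverage is maximized; both overheads are minimized. Values below are illustrative only.
Candidate = Tuple[float, float, float]

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if 'a' is at least as good as 'b' in every objective and better in one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly_better

def pareto_front(cands: List[Candidate]) -> List[Candidate]:
    """Non-dominated filter: the building block of NSGA-II's ranking step."""
    return [c for c in cands if not any(dominates(o, c) for o in cands if o is not c)]

versions = [
    (0.99, 1.80, 1.60),  # full duplication: best coverage, heavy overheads
    (0.92, 1.25, 1.20),  # selective hardening of critical variables
    (0.90, 1.40, 1.35),  # dominated by the previous candidate
    (0.70, 1.05, 1.05),  # light hardening: cheap, modest coverage
]
print(pareto_front(versions))
```

In a full NSGA-II run this filter is applied repeatedly, together with crowding-distance selection, to evolve the candidate set rather than just filter it once.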
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
Federal Highway Administration, Office of Research, Washington, D.C.
Abstract:
Optical coherence tomography (OCT) is an emerging coherence-domain technique capable of in vivo imaging of sub-surface structures at millimeter-scale depth. Its steady progress over the last decade has been galvanized by a breakthrough detection concept, termed spectral-domain OCT, which has yielded a dramatic, 150-fold improvement of the OCT signal-to-noise ratio, demonstrated for weakly scattering objects at video frame rates. As we have realized, however, an important OCT sub-system remains sub-optimal: the sample arm traditionally operates serially, i.e. in flying-spot mode. To realize full-field image acquisition, a Fourier holography system illuminated with a swept source is employed instead of the Michelson interferometer commonly used in OCT. The proposed technique, termed Fourier-domain OCT, offers a further leap in signal-to-noise ratio compared to flying-spot OCT systems and represents the main thrust of this paper. Fourier-domain OCT is described, and its basic theoretical aspects, including the reconstruction algorithm, are discussed.
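A minimal sketch of the kind of reconstruction such Fourier/spectral-domain systems rely on, recovering a depth profile by Fourier-transforming the interference signal recorded as a function of wavenumber, is shown below. The synthetic two-reflector sample and all parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Wavenumber axis of the swept/spectral acquisition (illustrative values only).
k = np.linspace(7.5e6, 8.5e6, 2048)        # rad/m, evenly sampled in k

# Synthetic sample: two reflectors at depths z1, z2 with different reflectivities.
z1, z2 = 150e-6, 400e-6                    # metres
spectrum = 1.0 + 0.5 * np.cos(2 * k * z1) + 0.2 * np.cos(2 * k * z2)

# Core of the reconstruction: Fourier-transform the k-domain interferogram.
a_scan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dz = np.pi / (k[-1] - k[0])                # depth per FFT bin

# Report local maxima above 10% of the strongest peak.
for m in range(1, a_scan.size - 1):
    if a_scan[m] > a_scan[m - 1] and a_scan[m] > a_scan[m + 1] and a_scan[m] > 0.1 * a_scan.max():
        print(f"reflector near {m * dz * 1e6:.0f} um")
```

The two printed peaks fall near the assumed reflector depths, which is the essential property the paper's reconstruction algorithm exploits.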
Abstract:
Multiresolution (or multi-scale) techniques make it possible for Web-based GIS applications to access large datasets. The performance of such systems relies on data transmission over the network and on multiresolution query processing. In the literature, the latter has received little research attention so far, and the existing methods are not capable of processing large datasets. In this paper, we aim to improve multiresolution query processing in an online environment. A cost model for such queries is proposed first, followed by three strategies for its optimization. Significant theoretical improvement can be observed when comparing against available methods. The application of these strategies is also discussed, and similar performance enhancement can be expected if they are implemented in online GIS applications.
Abstract:
In this thesis, we consider four different scenarios of interest in modern satellite communications. For each scenario, we will propose the use of advanced solutions aimed at increasing the spectral efficiency of the communication links. First, we will investigate the optimization of the current standard for digital video broadcasting. We will increase the symbol rate of the signal and determine the optimal signal bandwidth. We will apply the time-packing technique and propose a specifically designed constellation. We will then compare several receiver architectures with different performance and complexity. The second scenario still addresses broadcast transmissions, but in a network composed of two satellites. We will compare three alternative transceiver strategies, namely, signals completely overlapped in frequency, frequency division multiplexing, and the Alamouti space-time block code, and, for each technique, we will derive theoretical results on the achievable rates. We will also evaluate the performance of these techniques in three different channel models. The third scenario deals with the application of multiuser detection in multibeam satellite systems. We will analyze a case in which the users are near the edge of the coverage area and, hence, experience a high level of interference from adjacent cells. Also in this case, three different approaches will be compared: a classical approach in which each beam carries information for a single user, a cooperative solution based on time division multiplexing, and the Alamouti scheme. The information-theoretic analysis will be followed by the study of practical coded schemes. We will show that the theoretical bounds can be approached by a properly designed code or bit mapping. Finally, we will consider an Earth observation scenario, in which data is generated on the satellite and then transmitted to the ground. We will study two channel models, taking into account one or two transmit antennas, and apply techniques such as time and frequency packing, signal predistortion, multiuser detection, and the Alamouti scheme.
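Since the Alamouti space-time block code appears in several of these scenarios, a minimal sketch of its 2x1 encoding and linear combining over a flat-fading channel is given below. The QPSK symbols, channel gains, and noise level are invented for illustration and do not correspond to any configuration in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to send over two time slots from two transmit antennas.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Flat-fading channel gains from the two antennas to the single receive antenna.
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Alamouti encoding: slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*).
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining at the receiver recovers each symbol with full diversity gain.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
print(np.round(s1_hat, 3), np.round(s2_hat, 3))
```

The combining step decouples the two symbols, which is what makes Alamouti attractive in the two-satellite and multibeam comparisons described above.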
Abstract:
Protein crystallization has gained a new strategic and commercial relevance in the postgenomic era due to its pivotal role in structural genomics. Producing high quality crystals has always been a bottleneck to efficient structure determination, and this problem is becoming increasingly acute. This is especially true for challenging, therapeutically important proteins that typically do not form suitable crystals. The OptiCryst consortium has focused on relieving this bottleneck by making a concerted effort to improve the crystallization techniques usually employed, designing new crystallization tools, and applying such developments to the optimization of target protein crystals. In particular, the focus has been on the novel application of dual polarization interferometry (DPI) to detect suitable nucleation; the application of in situ dynamic light scattering (DLS) to monitor and analyze the process of crystallization; the use of UV-fluorescence to differentiate protein crystals from salt; the design of novel nucleants and seeding technologies; and the development of kits for capillary counterdiffusion and crystal growth in gels. The consortium collectively handled 60 new target proteins that had not been crystallized previously. From these, we generated 39 crystals with improved diffraction properties. Fourteen of these 39 were only obtainable using OptiCryst methods. For the remaining 25, OptiCryst methods were used in combination with standard crystallization techniques. Eighteen structures have already been solved (30% success rate), with several more in the pipeline.
Abstract:
AMS subject classification: 90B60, 90B50, 90A80.
Abstract:
Bus stops are key links in the journeys of transit patrons with disabilities. Inaccessible bus stops prevent people with disabilities from using fixed-route bus services, thus limiting their mobility. The Americans with Disabilities Act (ADA) of 1990 prescribes the minimum requirements for bus stop accessibility by riders with disabilities. Due to limited budgets, transit agencies can only select a limited number of bus stop locations for ADA improvements annually. These locations should preferably be selected so that they maximize the overall benefits to patrons with disabilities. In addition, transit agencies may also choose to implement the universal design paradigm, which involves higher design standards than current ADA requirements and can provide amenities that are useful for all riders, such as shelters and lighting. Many factors can affect the decision to improve a bus stop, including rider-based aspects such as the number of riders with disabilities, total ridership, customer complaints, accidents, and deployment costs, as well as locational aspects such as the proximity of employment centers, schools, and shopping areas. These interrelated factors make it difficult to identify optimal improvement locations without the aid of an optimization model. This dissertation proposes two integer programming models to help identify a priority list of bus stops for accessibility improvements. The first is a binary integer programming model designed to identify bus stops that need improvements to meet the minimum ADA requirements. The second is a multi-objective nonlinear mixed integer programming model that seeks an optimal compromise between the two accessibility design standards. Geographic Information System (GIS) techniques were used extensively both to prepare the model input and to examine the model output. An analytic hierarchy process (AHP) was applied to combine all of the factors affecting the benefits to patrons with disabilities. An extensive sensitivity analysis was performed to assess the reasonableness of the model outputs in response to changes in model constraints. Based on a case study using data from Broward County Transit (BCT) in Florida, the models were found to produce a list of bus stops that, upon close examination, was determined to be highly logical. Compared to traditional approaches based on staff experience, requests from elected officials, customer complaints, and the like, these optimization models offer a more objective and efficient platform on which to make bus stop improvement suggestions.
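A minimal sketch of the budget-constrained, binary selection idea (not the dissertation's actual formulation) is shown below using the PuLP solver. The stop identifiers, AHP-style benefit scores, costs, and budget are invented for illustration.

```python
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

# Illustrative data: benefit score and ADA upgrade cost per bus stop.
stops = ["S1", "S2", "S3", "S4", "S5"]
benefit = {"S1": 8.0, "S2": 5.5, "S3": 7.2, "S4": 3.1, "S5": 6.4}
cost = {"S1": 12000, "S2": 4000, "S3": 9000, "S4": 2500, "S5": 7000}
budget = 20000

# Binary decision: improve stop s this year or not.
x = LpVariable.dicts("improve", stops, cat=LpBinary)

model = LpProblem("ada_bus_stop_selection", LpMaximize)
model += lpSum(benefit[s] * x[s] for s in stops)          # maximize total benefit
model += lpSum(cost[s] * x[s] for s in stops) <= budget   # annual budget constraint
model.solve()

print([s for s in stops if x[s].value() == 1])
```

The dissertation's second, multi-objective nonlinear model would add further objectives and constraints on top of this basic selection structure.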
Abstract:
With the advantages and popularity of permanent magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. The generation of audible noise and associated vibration modes is characteristic of all electric motors; it is especially problematic in low-speed sensorless rotary actuation applications that use the high-frequency voltage injection technique. This dissertation is aimed at optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low-speed sensorless algorithm is simulated using an improved phase variable model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time.

A neural-network-based modeling approach was used to predict the audible noise due to the high-frequency injected carrier signal. This model was created based on noise measurements in a purpose-built chamber. The developed noise model is then integrated into the high-frequency-injection-based sensorless control scheme so that appropriate trade-offs and mitigation techniques can be devised. This improves the position estimation and control performance while keeping the noise below a specified level. Genetic algorithms were used to incorporate the noise optimization parameters into the developed control algorithm.

A novel wavelet-based filtering approach was proposed in this dissertation for the sensorless control algorithm at low speed. This filter is capable of extracting the position information at low values of injection voltage where conventional filters fail. The approach can be used in practice to reduce the injected voltage in the sensorless control algorithm, resulting in a significant reduction of noise and vibration.

Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and to improve the position estimation performance. The results obtained are important and represent original contributions that can help in choosing optimal parameters for sensorless control algorithms in many practical applications.
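The wavelet-filtering idea can be illustrated with a generic sketch: ordinary wavelet threshold denoising applied to a simulated, noisy position signal. This is not the dissertation's specific filter; the waveform, wavelet choice, and threshold rule are all assumptions.

```python
import numpy as np
import pywt

# Simulated demodulated position signal buried in noise (illustrative only).
t = np.linspace(0, 1, 4096)
position = np.sin(2 * np.pi * 3 * t)           # slowly varying rotor-angle-like signal
noisy = position + 0.4 * np.random.default_rng(1).normal(size=t.size)

# Wavelet decomposition, soft-thresholding of detail coefficients, reconstruction.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
threshold = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
recovered = pywt.waverec(denoised, "db4")

print("rms error before:", np.sqrt(np.mean((noisy - position) ** 2)))
print("rms error after: ", np.sqrt(np.mean((recovered[: t.size] - position) ** 2)))
```

The point of the sketch is only that wavelet-domain thresholding can recover a weak, slowly varying component from a noisy measurement, which is the setting the dissertation addresses at low injection voltages.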
Design optimization of modern machine drive systems for maximum fault tolerant and optimal operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, are an indispensable part of high-power-density products such as hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of their drive systems. Matching the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that offers the best compromise between the reliability and the optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques that connect the design and control of electric machines and drive systems.

A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical behavior of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization, as well as in a hardware-in-the-loop finite-element-based optimization. It was later employed in the design of very accurate and highly efficient physics-based customized observers, which are required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques.

The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process makes it possible to optimally revisit the assumptions used in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst-case operating conditions.

The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, and the physics-based fault diagnosis and sensorless techniques are described in detail.

To investigate the performance of the developed design test-bed, software and hardware setups were constructed first, and several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique over a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that the proposed technique, under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, makes the system an ideal candidate for propulsion systems.
Abstract:
Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperatures, which not only raise packaging and cooling costs but also degrade the performance, reliability, and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continued scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems.

In this dissertation, we address these power/thermal issues from a system-level perspective. Specifically, we employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. We first explore the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We then propose a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set, and develop three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extend the research from single-core to multi-core platforms: we investigate the energy estimation problem on multi-core platforms and develop a lightweight and accurate method to calculate the energy consumption for a given voltage schedule. Finally, we conclude the dissertation with a discussion of future extensions of this research.
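A minimal sketch of the kind of energy estimation discussed above, dynamic power scaling with V²·f plus a leakage term that grows with temperature, is shown below. The model constants, the linear leakage/temperature approximation, and the schedule format are illustrative assumptions, not the dissertation's actual method.

```python
# Each schedule entry: (duration_s, voltage_V, frequency_Hz) for one interval on a core.
# Constants are illustrative; a real model would be fitted to the target platform.
C_EFF = 1.0e-9            # effective switched capacitance (F)
LEAK_BASE = 0.10          # leakage power at the reference temperature (W)
LEAK_TEMP_COEFF = 0.004   # leakage increase per degree above reference (W/degC)
T_REF = 45.0              # reference temperature (degC)

def interval_energy(duration, voltage, frequency, temperature):
    """Energy of one interval: dynamic C*V^2*f plus temperature-dependent leakage."""
    dynamic = C_EFF * voltage ** 2 * frequency
    leakage = LEAK_BASE + LEAK_TEMP_COEFF * max(temperature - T_REF, 0.0)
    return (dynamic + leakage) * duration

def schedule_energy(schedule, temperatures):
    """Sum the energy of a voltage/frequency schedule given per-interval temperatures."""
    return sum(interval_energy(d, v, f, t)
               for (d, v, f), t in zip(schedule, temperatures))

core_schedule = [(0.010, 1.1, 2.0e9), (0.005, 0.9, 1.2e9), (0.010, 0.8, 0.8e9)]
core_temps = [70.0, 60.0, 52.0]   # assumed average temperature in each interval
print(f"{schedule_energy(core_schedule, core_temps):.4f} J")
```

The sketch only illustrates why the leakage/temperature dependency matters when comparing voltage schedules; the dissertation's method handles multi-core platforms and accuracy concerns beyond this simple sum.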
Abstract:
We report the results of a study into the factors controlling the quality of nanolithographic imaging. Self-assembled monolayer (SAM) coverage, subsequent postetch pattern definition, and minimum feature size all depend on the quality of the Au substrate used in material mask atomic nanolithographic experiments. We find that sputtered Au substrates yield much smoother surfaces and a higher density of {111}-oriented grains than evaporated Au surfaces. Phase imaging with an atomic force microscope shows that the quality and percentage coverage of SAM adsorption are much greater for sputtered Au surfaces. Exposure of the self-assembled monolayer to an optically cooled atomic Cs beam traversing a two-dimensional array of submicron material masks mounted a few microns above the self-assembled monolayer surface allowed determination of the minimum average Cs dose (2 Cs atoms per self-assembled monolayer molecule) to write the monolayer. Suitable wet etching, with etch rates of 2.2 nm min-1, results in optimized pattern definition. Utilizing these optimizations, material mask features as small as 230 nm in diameter with a fractional depth gradient of 0.820 nm were realized.