970 results for Design optimization
Abstract:
In this paper, we investigate the design of few-mode fibers (FMFs) guiding 2 to 12 linearly polarized (LP) modes with low differential mode delay (DMD) over the C-band, suitable for long-haul transmission. Two different types of refractive index profile are considered: a graded-core with a cladding trench (GCCT) profile and a multi-step-index (MSI) profile. The profile parameters are optimized to achieve the lowest possible DMD while keeping macro-bend losses (MBL) below the ITU-T standard recommendation. The optimization results show that the MSI profiles present lower DMD than the minimum achieved with a GCCT profile. Moreover, it is shown that the optimum DMD and the MBL scale with the number of modes for both profiles. The optimum DMD obtained for 12 LP modes is lower than 3 ps/km using a GCCT profile and lower than 2.5 ps/km using an MSI profile. The optimization results reveal that the most influential parameter of the GCCT profile is the refractive index relative difference at the core center, Δnco; reducing Δnco lowers the DMD at the expense of increasing the MBL. Regarding the MSI profiles, it is shown that 64 steps are required to obtain a DMD improvement considering 12 LP modes. Finally, the impact of the fabrication margins on the optimum DMD is analyzed. The probability of manufacturing an FMF with 12 LP modes and DMD lower than 12 ps/km is approximately 68% using a GCCT profile and 16% using an MSI profile. © 2013 IEEE.
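As a rough illustration of the constrained search this abstract describes, the sketch below minimizes DMD over GCCT-style profile parameters while rejecting designs whose macro-bend loss exceeds a bound. The dmd() and mbl() functions, the parameter ranges, and the loss limit are all hypothetical placeholders; in a real study they would come from a numerical mode solver and the ITU-T recommendation.

```python
import random

# Hypothetical surrogate models -- a real study would evaluate these with a
# numerical mode solver for the candidate fiber profile.
def dmd(delta_nco, trench_depth, trench_width):
    """Placeholder differential mode delay model (ps/km)."""
    return abs(3.0 * delta_nco - 0.01) * 1e3 + 0.1 * trench_width

def mbl(delta_nco, trench_depth, trench_width):
    """Placeholder macro-bend loss model (dB/turn)."""
    return 0.5 / (delta_nco * 1e3) - 0.05 * trench_depth

MBL_LIMIT = 0.1  # illustrative stand-in for the ITU-T bend-loss bound

best = None
for _ in range(10_000):
    p = (random.uniform(0.004, 0.012),  # delta_nco
         random.uniform(0.000, 0.008),  # trench depth
         random.uniform(1.0, 10.0))     # trench width (um)
    if mbl(*p) > MBL_LIMIT:             # discard profiles violating the MBL bound
        continue
    if best is None or dmd(*p) < dmd(*best):
        best = p

print("best GCCT-style parameters:", best)
```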
Abstract:
We show an improved DPSK receiver design that can increase the useful dispersion tolerance by up to a factor of two. The increased dispersion tolerance is achieved through optimization of the optical filter at the receiver and of the delay of the Mach-Zehnder interferometer. In this paper we fully explain the concept, quantify the gain, and provide an explanation for the operation of the receiver. © 2007 Optical Society of America.
Abstract:
In-fiber microchannels were fabricated directly in standard single-mode fiber using the femtosecond laser inscribe-and-etch technique. This method of creating in-fiber microchannels offers great versatility, since it allows complex three-dimensional structures to be inscribed and then preferentially etched with hydrofluoric acid. In addition, inscription does not require a photosensitive fiber; the modification is induced through nonlinear processes triggered by ultrashort laser pulses. Four in-fiber microchannel designs were experimentally investigated using this technique: a microhole, a microslot channel along the core, a microslot channel perpendicular to the core, and a helical channel around the core. Each device design was evaluated by monitoring the change in the optical spectrum while inserting a range of index-matching oils into each microchannel; a refractive index (R.I.) sensitivity of up to 1.55 dB/RIU was achieved in these initial tests. Furthermore, an all-femtosecond-laser-inscribed Fabry-Pérot-based refractometer with an R.I. sensitivity of 2.75 nm/RIU was also demonstrated. The Fabry-Pérot refractometer was formed by positioning a microchannel between two femtosecond-laser-inscribed point-by-point fiber Bragg gratings.
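A back-of-the-envelope check of the Fabry-Pérot sensitivity quoted above, assuming illustrative values for the cavity and channel lengths (none of which are given in the abstract): a resonance satisfies m·λ = 2·[n_fiber·(L - L_ch) + n_fluid·L_ch], so the wavelength shift per unit change of the fluid index is roughly λ·L_ch divided by the optical path length.

```python
# Illustrative numbers only -- the actual cavity geometry is not stated above.
lam = 1550e-9          # operating wavelength (m), assumed
L = 1.0e-3             # cavity length between the two FBGs (m), assumed
L_ch = 2.0e-6          # microchannel length inside the cavity (m), assumed
n_fiber, n_fluid = 1.447, 1.40

opl = n_fiber * (L - L_ch) + n_fluid * L_ch   # one-way optical path length
sensitivity = lam * L_ch / opl                # dlam/dn_fluid, in m per RIU
print(f"~{sensitivity * 1e9:.2f} nm/RIU")     # a few nm/RIU for a short channel
```

With these assumed values the estimate lands at about 2.1 nm/RIU, the same order as the 2.75 nm/RIU reported, which shows why a short fluid-filled section of a longer cavity yields only a few nm/RIU.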
Abstract:
This paper presents a new, dynamic feature representation method for high-value parts consisting of complex and intersecting features. The method first extracts features from the CAD model of a complex part. The dynamic status of each feature is then established between the various operations carried out during the whole manufacturing process. Each manufacturing and verification operation can be planned and optimized using the real condition of a feature, thus enhancing accuracy, traceability, and process control. The dynamic feature representation is complementary to the design models used as the underlying basis in current CAD/CAM and decision support systems. © 2012 CIRP.
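One plausible reading of a "dynamic feature" is a record that carries the static CAD geometry together with a per-operation status history; the sketch below, with entirely hypothetical names and fields, illustrates that idea.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: geometry comes from the CAD model once, while the
# status of the feature is tracked per manufacturing operation.
@dataclass
class DynamicFeature:
    name: str                    # e.g. "pocket_3" extracted from the CAD model
    nominal_geometry: dict       # dimensions/tolerances from the design model
    status: dict = field(default_factory=dict)   # operation -> measured state

    def update(self, operation: str, state: dict):
        """Record the real condition of the feature after an operation."""
        self.status[operation] = state

    def latest(self):
        """Most recent known condition, used to plan the next operation."""
        return list(self.status.values())[-1] if self.status else self.nominal_geometry

f = DynamicFeature("pocket_3", {"depth_mm": 12.0, "tol_mm": 0.05})
f.update("rough_milling", {"depth_mm": 11.6})
f.update("finish_milling", {"depth_mm": 11.98})
print(f.latest())
```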
Abstract:
The development of new, health-supporting foods of high quality and the optimization of food technological processes today require the application of statistical methods of experimental design. The principles and steps of statistical planning and evaluation of experiments will be explained. Using the example of the development of a gluten-free rusk (zwieback) enriched with roughage compounds, the application of a simplex-centroid mixture design will be shown. The results will be illustrated with various graphics.
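For k components, a simplex-centroid mixture design consists of the 2^k - 1 blends obtained by mixing every non-empty subset of components in equal proportions. A small generator follows; the ingredient names for the rusk example are hypothetical.

```python
from itertools import combinations

def simplex_centroid(components):
    """All 2^k - 1 blends of a simplex-centroid mixture design:
    for every non-empty subset, mix its members in equal proportions."""
    k = len(components)
    design = []
    for r in range(1, k + 1):
        for subset in combinations(components, r):
            blend = {c: (1.0 / r if c in subset else 0.0) for c in components}
            design.append(blend)
    return design

# Illustrative three-component rusk formulation (hypothetical component names).
for point in simplex_centroid(["rice_flour", "corn_starch", "fiber"]):
    print(point)
```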
Abstract:
Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections while considering the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control.

An important component of the proposed method is a microscopic delay estimation model designed specifically to optimize adaptive traffic signals with TSP. Macroscopic delay models such as the Highway Capacity Manual (HCM) delay model are unable to accurately consider the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations.

The performance of PGA was first evaluated against the simple GA. The results show that PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool to design and evaluate different TSP strategies under both actuated and adaptive signal control.
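The sketch below shows the genetic-algorithm core of such an optimizer in miniature: chromosomes are green-split vectors, fitness is delay, and selection, crossover, and mutation evolve the population. The delay() function is a toy stand-in for the microscopic delay estimation model, and a real PGA would evolve several subpopulations in parallel with migration between them.

```python
import random

CYCLE = 120    # cycle length in seconds -- illustrative
PHASES = 4

def delay(splits):
    """Toy stand-in for the microscopic delay estimation model."""
    target = [0.40, 0.25, 0.20, 0.15]   # assumed demand pattern per phase
    return sum((s - t) ** 2 for s, t in zip(splits, target))

def normalize(g):
    total = sum(g)
    return [x / total for x in g]

def evolve(pop_size=40, generations=200):
    pop = [normalize([random.random() for _ in range(PHASES)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=delay)                 # lower delay = fitter
        survivors = pop[: pop_size // 2]    # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]             # crossover
            i = random.randrange(PHASES)
            child[i] = max(child[i] + random.gauss(0, 0.05), 0.01)  # mutation
            children.append(normalize(child))
        pop = survivors + children
    return min(pop, key=delay)

best = evolve()
print("green splits (s):", [round(s * CYCLE, 1) for s in best])
```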
Abstract:
Numerical optimization is a technique in which a computer is used to explore design-parameter combinations to find extremes in performance factors. In multi-objective optimization, several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective-function space composed of solutions in which performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created to solve multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation and automatically switches among its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were optimized simultaneously: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice: first using Modified Newton Impact Theory (MNIT) to determine the pressure drag on the penetrator, and then using a Parabolized Navier-Stokes (PNS) solver that includes viscosity to evaluate the drag. The studies show the difference in the optimized penetrator shapes when viscosity is absent and when it is present in the optimization. In modern optimization problems, objective-function evaluations may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function: once enough data about that behavior has been collected, the response surface can stand in for the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 nonlinear functions involving from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
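The defining computation behind a Pareto frontier is the dominance test: design a dominates design b if it is at least as good in every objective and strictly better in at least one. A minimal non-dominated filter, with made-up drag/volume numbers and both objectives cast as minimization, looks like this.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated designs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Invented (drag, -volume) pairs: minimizing -volume maximizes volume.
designs = [(1.0, 5.0), (0.8, 6.0), (1.2, 4.0), (1.1, 5.2), (0.8, 7.0)]
print(pareto_front(designs))   # the trade-off curve between the two objectives
```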
Abstract:
The total time a customer spends in a business process system, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process, the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model.

The research described in this dissertation develops an open queueing network model for quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network (MOQN) models so that customer flow in human-server-oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process with consideration of specialization and coordination.

The main findings of the research are as follows. First, parallelization can reduce the cycle-time for those customer classes that require more than one parallel activity; however, under highly utilized servers the coordination time introduced by parallelization overwhelms the savings, since the waiting time increases significantly and thus the cycle-time increases. Third, the level of industrial technology employed by a company and the coordination time needed to manage the tasks have the strongest impact on business process design: when the level of industrial technology employed by the company is high, more division is required to improve the cycle-time; when the required coordination time is high, consolidation is required to improve the cycle-time.
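As an example of the queueing building blocks such a network model composes, the snippet below computes the mean queueing delay at a single multi-server station with the standard Erlang C formula; the arrival rate, service rate, and server count are invented for illustration.

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """Mean wait in queue (Wq) for an M/M/c station, one node of the network."""
    a = lam / mu                     # offered load (erlangs)
    rho = a / c                      # server utilization
    assert rho < 1, "station is unstable"
    erlang_c = (a**c / factorial(c)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(c))
        + a**c / factorial(c)
    )
    return erlang_c / (c * mu - lam)  # cycle time adds the service time 1/mu

# Illustrative station: 12 customers/hour, each clerk serves 5/hour, 3 clerks.
wq = mmc_wait(lam=12, mu=5, c=3)
print(f"wait {wq:.3f} h, station cycle time {wq + 1/5:.3f} h")
```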
Abstract:
Bus stops are key links in the journeys of transit patrons with disabilities. Inaccessible bus stops prevent people with disabilities from using fixed-route bus services, thus limiting their mobility. The Americans with Disabilities Act (ADA) of 1990 prescribes the minimum requirements for bus stop accessibility for riders with disabilities. Due to limited budgets, transit agencies can select only a limited number of bus stop locations for ADA improvements annually. These locations should preferably be selected so that they maximize the overall benefits to patrons with disabilities. In addition, transit agencies may also choose to implement the universal design paradigm, which involves higher design standards than the current ADA requirements and can provide amenities useful to all riders, such as shelters and lighting. Many factors can affect the decision to improve a bus stop, including rider-based aspects such as the number of riders with disabilities, total ridership, customer complaints, accidents, and deployment costs, as well as locational aspects such as the locations of employment centers, schools, and shopping areas. These interlacing factors make it difficult to identify optimal improvement locations without the aid of an optimization model. This dissertation proposes two integer programming models to help identify a priority list of bus stops for accessibility improvements. The first is a binary integer programming model designed to identify bus stops that need improvements to meet the minimum ADA requirements. The second is a multi-objective nonlinear mixed integer programming model that attempts to achieve an optimal compromise between the two accessibility design standards. Geographic Information System (GIS) techniques were used extensively both to prepare the model input and to examine the model output. An analytic hierarchy process (AHP) was applied to combine all of the factors affecting the benefits to patrons with disabilities. An extensive sensitivity analysis was performed to assess the reasonableness of the model outputs in response to changes in model constraints. Based on a case study using data from Broward County Transit (BCT) in Florida, the models were found to produce a list of bus stops that, upon close examination, was determined to be highly logical. Compared to traditional approaches relying on staff experience, requests from elected officials, and customer complaints, these optimization models offer a more objective and efficient platform on which to base bus stop improvement suggestions.
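The first model's budget-limited selection has the flavor of a 0/1 knapsack: maximize the (AHP-weighted) benefit of the chosen stops subject to a cost budget. The dynamic-programming sketch below uses invented costs and benefit scores; the dissertation's actual formulation is a binary integer program with additional constraints.

```python
def select_stops(costs, benefits, budget):
    """0/1 knapsack: choose the set of stops that maximizes total benefit
    to patrons with disabilities without exceeding the improvement budget."""
    n = len(costs)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                  # skip stop i-1
            if costs[i - 1] <= b:                        # or improve it
                best[i][b] = max(best[i][b],
                                 best[i - 1][b - costs[i - 1]] + benefits[i - 1])
    chosen, b = [], budget                               # trace back the choices
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return chosen[::-1], best[n][budget]

# Hypothetical ADA upgrade costs (k$) and AHP-weighted benefit scores.
costs = [12, 7, 30, 16, 5]
scores = [40, 25, 90, 50, 10]
print(select_stops(costs, scores, budget=40))   # -> ([0, 1, 3, 4], 125)
```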
Abstract:
Today, over 15,000 Ion Mobility Spectrometry (IMS) analyzers are employed at security checkpoints worldwide to detect explosives and illicit drugs. Current portal IMS instruments and other electronic-nose technologies detect explosives and drugs by analyzing samples containing the headspace air and loose particles residing on a surface. Canines can outperform these systems at sampling and detecting low-vapor-pressure explosives and drugs, such as RDX, PETN, cocaine, and MDMA, because these biological detectors target the volatile signature compounds available in the headspace rather than the non-volatile parent compounds of the explosives and drugs.

In this dissertation research, volatile signature compounds available in the headspace over explosive and drug samples were detected using solid-phase microextraction (SPME) as a headspace sampling tool coupled to an IMS analyzer. A genetic algorithm (GA) technique was developed to optimize the operating conditions of a commercial IMS (GE Itemizer 2), leading to the successful detection of plastic explosives (Detasheet, Semtex H, and C-4) and illicit drugs (cocaine, MDMA, and marijuana). Short sampling times (between 10 s and 5 min) were adequate to extract and preconcentrate sufficient analytes (> 20 ng) representing the volatile signatures in the headspace of a 15 mL glass vial or a quart-sized can containing ≤ 1 g of the bulk explosive or drug.

Furthermore, a research-grade IMS with flexibility for changing operating conditions and physical configurations was designed and fabricated to accommodate future research into different analytes or physical configurations. The design and construction of the FIU-IMS were facilitated by computer modeling and simulation of ion behavior within an IMS. The simulation method developed uses SIMION/SDS and was evaluated with experimental data collected using a commercial IMS (PCP Phemto Chem 110). The FIU-IMS instrument has performance comparable to the GE Itemizer 2 (average resolving power of 14, resolution of 3 between two drugs and two explosives, and LODs ranging from 0.7 to 9 ng).

The results from this dissertation further advance the concept of targeting volatile components to presumptively detect the presence of concealed bulk explosives and drugs by SPME-IMS, and the new FIU-IMS provides a flexible platform for future IMS research projects.
Abstract:
A wireless mesh network is a mesh network implemented over a wireless network system such as wireless LANs. Wireless Mesh Networks (WMNs) are promising for numerous applications such as broadband home networking, enterprise networking, transportation systems, health and medical systems, and security surveillance systems, and have therefore received considerable attention from both industrial and academic researchers. This dissertation explores schemes for resource management and optimization in WMNs by means of network routing and network coding.

In this dissertation, we propose three optimization schemes. (1) First, a triple-tier optimization scheme is proposed for the load-balancing objective. The first-tier mechanism achieves long-term routing optimization, and the second-tier mechanism, using the optimization results obtained from the first tier, performs short-term adaptation to deal with the impact of dynamic channel conditions. A greedy sub-channel allocation algorithm is developed as the third-tier optimization scheme to further reduce the congestion level in the network. We conduct thorough theoretical analysis to show the correctness of our design and establish the properties of our scheme. (2) Then, a Relay-Aided Network Coding scheme called RANC is proposed to improve the performance gain of network coding by exploiting the physical-layer multi-rate capability in WMNs. We conduct rigorous analysis to find the design principles and study the tradeoff in the performance gain of RANC. Based on the analytical results, we provide a practical solution by decomposing the original design problem into two sub-problems: a flow partition problem and a scheduling problem. (3) Lastly, a joint optimization scheme of routing in the network layer and network-coding-aware scheduling in the MAC layer is introduced. We formulate the network optimization problem and exploit the structure of the problem via dual decomposition. We find that the original problem is composed of two sub-problems, a routing problem in the network layer and a scheduling problem in the MAC layer, coupled through the link capacities. We solve the routing problem with two different adaptive routing algorithms and then provide a distributed coding-aware scheduling algorithm. Experimental results show that the proposed schemes can significantly improve network performance.
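A greedy sub-channel allocation in the spirit of the third tier might, as a rough sketch, assign links in order of decreasing load to whichever sub-channel currently carries the least interfering load; the link names, loads, and interference rule below are invented.

```python
def greedy_allocate(link_loads, interferes, n_channels):
    """link_loads: {link: load}; interferes(a, b) is truthy if a, b conflict."""
    channel_of = {}
    # Place the most heavily loaded links first.
    for link in sorted(link_loads, key=link_loads.get, reverse=True):
        congestion = []
        for ch in range(n_channels):
            rivals = [l for l, c in channel_of.items()
                      if c == ch and interferes(link, l)]
            congestion.append(sum(link_loads[l] for l in rivals))
        # Pick the sub-channel with the least interfering load so far.
        channel_of[link] = min(range(n_channels), key=congestion.__getitem__)
    return channel_of

loads = {"AB": 5.0, "BC": 3.0, "CD": 4.0, "AC": 2.0}
conflict = lambda a, b: set(a) & set(b)   # links sharing a node interfere
print(greedy_allocate(loads, conflict, n_channels=3))
```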
Abstract:
Bio-molecular interactions exist ubiquitously in all biological systems. The aim of this dissertation project was to construct a powerful surface plasmon resonance (SPR) sensor. The SPR system is used to study bio-molecular interactions in real time and without labeling. A surface plasmon is the oscillation of free electrons in a metal coupled with surface electromagnetic waves, and these surface electromagnetic waves provide a sensitive probe for studying bio-molecular interactions on metal surfaces. This project resulted in the successful construction and optimization of a homemade SPR sensor and the development of several new, powerful protocols for studying bio-molecular interactions. It was discovered through this project that the limitations of earlier SPR sensors are related not only to the instrumentation design and operating procedures, but also to the complex behaviors of bio-molecules on sensor surfaces, which are very different from their behavior in solution. Based on these discoveries, the instrumentation design and operating procedures were fully optimized. A set of existing sensor-surface treatment protocols were tested and evaluated, and new protocols were developed in this project; the new protocols have demonstrated excellent performance in studying biomolecular interactions. The optimized homemade SPR sensor was used to study protein-surface interactions, which are responsible for many complex organic cell activities. The co-existence of different driving forces and their correlation with the structures of the protein and the surface make understanding the fundamental mechanism of protein-surface interactions a very challenging task. Using the improved SPR sensor, the electrostatic interaction and the hydrophobic interaction were studied separately. The results of this project directly confirmed the theoretical predictions for the electrostatic force between a protein and a surface. In addition, this project demonstrated that the strength of the protein-surface hydrophobic interaction does not depend solely on hydrophobicity, as reported earlier; surface structure also plays a significant role.
Abstract:
Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly to high chip temperature, which not only raises packaging and cooling costs, but also degrades the performance, reliability, and life span of computing systems. Moreover, high chip temperature greatly increases leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems.

In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption for a given voltage schedule on a multi-core platform. Finally, we conclude the dissertation with a discussion of future extensions of our research.
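As a toy version of the energy estimation described above, the snippet below integrates dynamic power (proportional to C·V²·f) plus a crude temperature-scaled leakage term over a voltage/frequency schedule; every constant is illustrative rather than taken from the dissertation.

```python
# Back-of-the-envelope energy for a voltage/frequency schedule, with a
# temperature-dependent leakage term (all constants are illustrative).
C_EFF = 1e-9      # effective switched capacitance (F), assumed
K_LEAK = 0.05     # leakage coefficient, assumed to grow with temperature

def energy(schedule, temp=350.0):
    """schedule: list of (voltage V, frequency Hz, duration s) intervals."""
    total = 0.0
    for v, f, t in schedule:
        p_dyn = C_EFF * v**2 * f                 # dynamic power ~ C * V^2 * f
        p_leak = K_LEAK * v * (temp / 300.0)     # crude leakage/temperature coupling
        total += (p_dyn + p_leak) * t
    return total

# Oscillating between a high and a low speed, loosely in the spirit of an
# M-Oscillations-style plan.
plan = [(1.2, 1.0e9, 0.004), (0.9, 0.6e9, 0.006)] * 100
print(f"{energy(plan):.3f} J")
```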
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power, while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or frequently repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the elements that incur performance overheads. The concepts presented in this study can easily be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on Field-Programmable Gate Array (FPGA) technology is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
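The payoff of accelerating a profiled hotspot is governed by Amdahl's law, which bounds the whole-system speedup by the fraction of runtime the hotspot occupies. The numbers below are hypothetical, not the dissertation's measurements.

```python
# Amdahl's-law estimate of system speedup from accelerating a profiled hotspot.
def overall_speedup(hotspot_fraction, accel_speedup):
    """1 / ((1 - p) + p / s): the serial remainder limits the total gain."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

# If profiling showed a hotspot taking 70% of runtime and an FPGA accelerator
# ran it 20x faster, the whole system would speed up by about 3x.
print(f"{overall_speedup(0.70, 20.0):.2f}x")
```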
Abstract:
The design, construction, and optimization of a low-power, high-temperature heated ceramic sensor to detect leakage of halogen gases in refrigeration systems are presented. The manufacturing process used microelectronic assembly and the Low Temperature Co-fired Ceramic (LTCC) technique. Four basic sensor materials were fabricated and tested: Li2SiO3, Na2SiO3, K2SiO3, and CaSiO3. The sensor material, sensor size, operating temperature, bias voltage, electrode size, firing temperature, gas flow, and sensor life were evaluated. All sensors responded to the gas, showing stability and reproducibility. Before exposing the sensor to the gas, the sensor was modeled as resistors in series, and the calculated values agreed with the experimental ones. The sensor response to the gas was divided into surface diffusion and bulk diffusion; both were analyzed, showing agreement between the calculations and the experimental values. The sensor with 51.5% CaSiO3 + 48.5% Li2SiO3 showed the best results, including a stable current and response to the gas.