972 results for simulation-optimization


Relevance:

30.00%

Publisher:

Abstract:

We have optimized the settings of evanescent wave imaging for the visualization of a protein adsorption layer. The enhancement of the evanescent wave at the interface produced by the incident angle, the polarization state of the light beam, and a gold layer is considered. To improve the image contrast of a protein monolayer in experiments, we optimized three factors through theoretical simulation: the incident angle, the polarization of the light beam, and the thickness of an introduced thin gold layer.
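
A minimal sketch of the kind of three-factor search the optimization above implies, assuming a hypothetical contrast model (`evanescent_contrast` below is an illustrative placeholder, not the paper's theoretical simulation):

```python
import itertools

import numpy as np


def evanescent_contrast(angle_deg, polarization, gold_nm):
    """Hypothetical stand-in for the theoretical simulation: returns a
    simulated image contrast of the protein monolayer for one setting."""
    # Placeholder model with a peak near 70 deg incidence, p-polarization,
    # and a ~45 nm gold film (numbers are illustrative only).
    pol_gain = 1.5 if polarization == "p" else 1.0
    return pol_gain * np.exp(-((angle_deg - 70.0) / 5.0) ** 2) \
                    * np.exp(-((gold_nm - 45.0) / 15.0) ** 2)


# Exhaustive sweep over the three factors considered in the paper.
angles = np.arange(60.0, 80.0, 0.5)      # incident angle (degrees)
polarizations = ["s", "p"]               # polarization state
thicknesses = np.arange(0.0, 80.0, 5.0)  # gold layer thickness (nm)

best = max(itertools.product(angles, polarizations, thicknesses),
           key=lambda setting: evanescent_contrast(*setting))
print("best (angle, polarization, gold thickness):", best)
```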

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a batch file describing the detailed structure and the corresponding physical processes of a Micro-Mesh Gaseous Structure (Micromegas) detector, together with the macro commands and control structures based on the Garfield program, has been developed. Using the Garfield program controlled by this batch file, the detector's gain and spatial resolution have been investigated under different conditions. The simulation results not only show how the mesh and drift voltages, the gas mixture proportion, the distance between the mesh cathode and the printed-circuit-board readout anode, and the lines-per-inch (LPI) of the mesh cathode influence the gain and spatial resolution of the detector, but also help to optimize the design, shorten the experimental period, and reduce cost during detector development. They further indicate that the Garfield program is a powerful tool for Micromegas detector design and optimization.
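
A hedged sketch of the parameter sweep such a batch-file-driven study implies. The Garfield input syntax itself is not reproduced here; `run_garfield` is a hypothetical wrapper that would generate a batch file for one parameter set, launch Garfield, and return the simulated gain and spatial resolution, and the parameter values are illustrative only:

```python
import itertools


def run_garfield(mesh_v, drift_v, gas_mix, gap_mm, lpi):
    """Hypothetical wrapper: write a Garfield batch file for this parameter
    set, launch the program, and parse (gain, spatial_resolution_um) from
    its output.  The Garfield commands themselves are not reproduced here."""
    return float("nan"), float("nan")   # placeholder until wired to Garfield


# Parameter grid mirroring the factors studied in the paper
# (the specific values are illustrative only).
mesh_voltages = [350, 400, 450]                      # V
drift_voltages = [600, 800, 1000]                    # V
gas_mixtures = ["Ar/iC4H10 95/5", "Ar/iC4H10 90/10"]
gaps_mm = [0.05, 0.10, 0.20]                         # mesh-to-anode distance
mesh_lpi = [400, 670]                                # lines per inch

results = {
    combo: run_garfield(*combo)
    for combo in itertools.product(mesh_voltages, drift_voltages,
                                   gas_mixtures, gaps_mm, mesh_lpi)
}
print(f"{len(results)} parameter combinations queued for simulation")
```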

Relevance:

30.00%

Publisher:

Abstract:

An explicit relation between the composition and the mechanical properties of silicone rubber was derived from the physics of polymer elasticity, an implicit relation among material composition, reaction conditions, and reaction efficiency was obtained from chemical thermodynamics and kinetics, and an implicit multi-objective optimization model was then constructed. A genetic algorithm was applied to optimize the material composition and reaction conditions, and the finite element method was used to solve the multi-objective functions over the cross-linking reaction process, on which basis a new optimization methodology for cross-linking reaction processes was established. Using this methodology, rubber materials can be designed according to pre-specified requirements.
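
A compact sketch of the genetic-algorithm step, under stated assumptions: the decision vector, its bounds, and the weighted-sum `objective` below are illustrative placeholders rather than the paper's model, which evaluates the objectives through a finite-element solution of the cross-linking process:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative decision vector: [crosslinker fraction, filler fraction,
# reaction temperature (K)] with hypothetical bounds.
LO = np.array([0.01, 0.05, 320.0])
HI = np.array([0.10, 0.40, 420.0])


def objective(x):
    """Placeholder weighted-sum of the objectives (e.g. modulus target and
    reaction efficiency); the paper evaluates these via FEM simulation."""
    modulus_err = (x[0] * 40.0 + x[1] * 10.0 - 3.0) ** 2
    efficiency = np.exp(-((x[2] - 380.0) / 30.0) ** 2)
    return modulus_err - efficiency


def ga(pop_size=40, gens=100, mut=0.1):
    pop = rng.uniform(LO, HI, size=(pop_size, LO.size))
    for _ in range(gens):
        fit = np.array([objective(p) for p in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # lower is better
        # Uniform crossover + Gaussian mutation, clipped to the bounds.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, LO.size)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += mut * (HI - LO) * rng.standard_normal(children.shape)
        pop = np.clip(children, LO, HI)
    return pop[np.argmin([objective(p) for p in pop])]


print("candidate composition / reaction conditions:", ga())
```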

Relevance:

30.00%

Publisher:

Abstract:

Flip-chip technology is a high-chip-density solution for meeting the demands of very-large-scale integration design. For wireless sensor nodes and similar RF applications, the growing requirements for wearable and implantable implementations make flip-chip a leading technology for integration and miniaturization. In this paper, the flip-chip assembly is treated as part of the whole system and its effect on RF performance is analyzed. A simulation-based design flow is presented for transferring a surface-mount PCB design to a flip-chip die package for RF applications. Models are built in Q3D Extractor to extract equivalent circuits from the parasitic parameters of the interconnections, for both bare-die and wire-bonding technologies. These parameters, together with the PCB layout and stack-up, are then modeled in the design of the essential parts of the flip-chip RF circuit. Through simulation and optimization, the flip-chip package is redesigned using the parameters obtained from a simulation sweep. Experimental results agree well with the simulation when the return-loss performance of the bare-die package is compared before and after optimization. This design method can generally be used to transfer any surface-mount PCB to a flip-chip package for RF systems, or to predict the RF specifications of an RF system using flip-chip technology.
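
For reference, the return loss discussed here follows from the standard reflection-coefficient relation Gamma = (Z_L - Z_0)/(Z_L + Z_0). The short sketch below uses illustrative component values (not the paper's extracted parasitics) to show how the series inductance of an interconnect shifts S11 at 2.4 GHz:

```python
import numpy as np

F = 2.4e9   # frequency of interest (Hz), assumed ISM band
Z0 = 50.0   # system reference impedance (ohms)


def s11_db(z_load, z0=Z0):
    """S11 in dB from Gamma = (Z_L - Z_0) / (Z_L + Z_0);
    more negative means a better match."""
    gamma = (z_load - z0) / (z_load + z0)
    return 20.0 * np.log10(abs(gamma))


# Illustrative antenna port: a 50-ohm load seen through a series
# interconnect inductance (long bond wire vs. short flip-chip bump).
for name, l_series in [("wire bond ~2 nH", 2e-9),
                       ("flip-chip bump ~0.1 nH", 0.1e-9)]:
    z_seen = 50.0 + 1j * 2 * np.pi * F * l_series
    print(f"{name}: S11 = {s11_db(z_seen):.1f} dB")
```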

Relevance:

30.00%

Publisher:

Abstract:

A comparison study was carried out between a wireless sensor node with a bare-die flip-chip mounted transceiver and its reference board with a BGA-packaged transceiver chip. The main focus is the return loss (S-parameter S11) at the antenna connector, which depends strongly on the impedance mismatch. Modeling that includes the different interconnect technologies, substrate properties, and passive components was performed to simulate the system in Ansoft Designer. Statistical methods, such as standard deviation and regression analysis, were applied to the RF performance data to assess the impact of the different parameters on the return loss. An extreme-value search based on this analysis then provides the parameter values that minimize the return loss. Measurements agree well with the analysis and simulation and show a large improvement in return loss, from -5 dB to -25 dB, for the target wireless sensor node.
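
A minimal sketch of the statistical step described above, using made-up sample data: fit an ordinary least-squares model of S11 against a few design parameters, then search the fitted surface for the combination with the lowest predicted return loss. The parameter names and numbers are hypothetical; the actual study drives Ansoft Designer sweeps:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design matrix: [trace length (mm), substrate e_r, shunt C (pF)]
X = rng.uniform([2.0, 3.0, 0.5], [10.0, 4.8, 2.0], size=(60, 3))

# Hypothetical simulated S11 in dB (more negative = better match).
s11 = -25 + 1.2 * (X[:, 0] - 2) + 4.0 * abs(X[:, 2] - 1.0) \
      + rng.normal(0, 0.5, size=60)

# Ordinary least squares: S11 ~ 1 + parameters.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, s11, rcond=None)

# Extreme-value search on a dense grid of the fitted surface.
grid = np.stack(np.meshgrid(np.linspace(2, 10, 30),
                            np.linspace(3.0, 4.8, 30),
                            np.linspace(0.5, 2.0, 30)), axis=-1).reshape(-1, 3)
pred = np.column_stack([np.ones(len(grid)), grid]) @ coef
best = grid[np.argmin(pred)]
print("parameters giving lowest predicted S11:", best, "->",
      round(float(pred.min()), 1), "dB")
```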

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) are becoming widely adopted for many applications, including complicated tasks such as building energy management. However, one major concern for WSN technologies is the short lifetime and high maintenance cost caused by the limited battery energy. One solution is to scavenge ambient energy, which is then rectified to power the WSN. The objective of this thesis was to investigate the feasibility of an ultra-low energy consumption power management system suitable for harvesting sub-mW photovoltaic and thermoelectric energy to power WSNs. To achieve this goal, energy harvesting system architectures have been analyzed. Detailed analysis of energy storage units (ESUs) has led to an innovative ESU solution for the target applications. A battery-less, long-lifetime ESU and its associated power management circuitry, including a fast-charge circuit, a self-start circuit, an output voltage regulation circuit and a hybrid ESU combining a super-capacitor and a thin-film battery, were developed to achieve continuous operation of the energy harvester. Low start-up voltage DC/DC converters have been developed for 1 mW-level thermoelectric energy harvesting. A novel method of altering the thermoelectric generator (TEG) configuration in order to match impedance has been verified in this work. Novel maximum power point tracking (MPPT) circuits, exploiting the fractional open-circuit voltage method, were developed specifically for sub-1 mW photovoltaic energy harvesting applications. The MPPT energy model has been developed and verified against both SPICE simulation and implemented prototypes. Both the indoor-light and thermoelectric energy harvesting methods proposed in this thesis have been implemented as prototype devices. The improved indoor-light energy harvester prototype demonstrates 81% MPPT conversion efficiency with 0.5 mW input power. This improvement makes light energy harvesting from small sources (e.g. a credit-card-sized solar panel under 500 lux indoor lighting) a feasible approach. The 50 mm × 54 mm thermoelectric energy harvester prototype generates 0.95 mW at 28% conversion efficiency when placed on a 60 °C heat source. Both prototypes can continuously power a WSN for building energy management applications in a typical office building environment. In addition to the hardware development, a comprehensive system energy model has been developed. This model can not only predict the available and consumed energy based on real-world ambient conditions, but also be employed to optimize the system design and configuration. The energy model has been verified against indoor photovoltaic energy harvesting system prototypes in long-term deployment experiments.
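
A sketch of the fractional open-circuit voltage (FOCV) idea behind the MPPT circuit, under the usual assumption that the panel's maximum-power voltage is roughly a fixed fraction k of its open-circuit voltage (k is commonly quoted around 0.7-0.8 for silicon cells); the value of k, the sampling interval, and the callback interface below are illustrative rather than the thesis design:

```python
import time

K_FOCV = 0.76          # assumed fraction: V_mpp ~= k * V_oc
SAMPLE_PERIOD_S = 5.0  # how often V_oc is re-sampled (illustrative)


def focv_mppt(measure_voc, set_vref, stop_after_s=30.0):
    """Fractional open-circuit-voltage MPPT loop.

    measure_voc(): briefly disconnects the panel and returns V_oc (V).
    set_vref(v):   sets the converter's input-voltage regulation point.
    Both are hardware-dependent callbacks supplied by the caller.
    """
    t_end = time.time() + stop_after_s
    while time.time() < t_end:
        voc = measure_voc()            # momentary open-circuit sample
        set_vref(K_FOCV * voc)         # regulate the panel near its MPP
        time.sleep(SAMPLE_PERIOD_S)    # harvest until the next sample


# Example with stubbed hardware callbacks:
if __name__ == "__main__":
    focv_mppt(measure_voc=lambda: 2.1,
              set_vref=lambda v: print(f"V_ref set to {v:.2f} V"),
              stop_after_s=12.0)
```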

Relevance:

30.00%

Publisher:

Abstract:

In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that, in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (permittivity coefficient). As one application of this vector field model, we propose a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in the places of the network where nodes have high residual energy, and lowering it where the nodes do not have much energy left. Our simulations show that this method gives a significant increase in network lifetime compared to the shortest-path and weighted-shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend the approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction and how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of the vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of the proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to simultaneous testing by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates and call the technique the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to provide methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find the signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade due to cross-interference caused by simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
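
In the notation the first part of this formulation suggests (the symbols here are ours, chosen for illustration rather than taken from the thesis), let $\mathbf{D}(x)$ be the information-flow density at location $x$, $\rho(x)$ the net rate of data generation (positive at sensors, negative at destinations), and $\epsilon(x)$ the permittivity-like weight. The routing problem then has the electrostatics-style form

\[
\min_{\mathbf{D}} \int_{A} \frac{|\mathbf{D}(x)|^{2}}{\epsilon(x)}\, dA
\quad \text{subject to} \quad \nabla \cdot \mathbf{D}(x) = \rho(x),
\]

whose optimality condition yields $\mathbf{D} = -\epsilon \nabla \phi$ for a scalar potential $\phi$, i.e. a Poisson-type equation $-\nabla \cdot (\epsilon \nabla \phi) = \rho$, mirroring the field of charges in a non-homogeneous dielectric. Raising $\epsilon$ where residual energy is high then steers routes through those regions, which is the energy-aware routing scheme described above.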

Relevance:

30.00%

Publisher:

Abstract:

An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.

In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
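
A greatly simplified sketch of an incremental GA for order dispatching, under assumptions that are not taken from the thesis: chromosomes are order permutations, fitness is a toy total-tardiness measure on a single resource, and "incremental" is reduced to splicing newly arrived orders into the surviving population instead of restarting the search:

```python
import random

random.seed(0)

# Toy orders: id -> (processing time, due date); illustrative only.
ORDERS = {i: (random.randint(1, 9), random.randint(10, 60)) for i in range(12)}


def tardiness(seq):
    """Total tardiness of a dispatch sequence on a single resource."""
    t, total = 0, 0
    for oid in seq:
        p, due = ORDERS[oid]
        t += p
        total += max(0, t - due)
    return total


def evolve(pop, gens=200, keep=10):
    for _ in range(gens):
        pop.sort(key=tardiness)
        pop = pop[:keep]
        while len(pop) < 2 * keep:          # order crossover + swap mutation
            a, b = random.sample(pop[:keep], 2)
            cut = random.randrange(len(a))
            child = a[:cut] + [o for o in b if o not in a[:cut]]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
            pop.append(child)
    return sorted(pop, key=tardiness)[0]


population = [random.sample(list(ORDERS), len(ORDERS)) for _ in range(20)]
best = evolve(population)

# "Incremental" step: splice a newly arrived order into the survivors
# instead of re-initialising the whole population.
ORDERS[100] = (5, 25)
population = [seq + [100] for seq in [best] * 20]
print("updated best dispatch sequence:", evolve(population))
```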

We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions that help enterprises increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decision-making.

Relevance:

30.00%

Publisher:

Abstract:

This paper demonstrates a modelling and design approach that couples computational mechanics techniques with numerical optimisation and statistical models for virtual prototyping and testing in different application areas concerning the reliability of electronic packages. The integrated software modules provide a design engineer in the electronic manufacturing sector with fast design and process solutions by optimising key parameters and taking into account the complexity of certain operational conditions. The integrated modelling framework is obtained by coupling the multi-physics finite element framework PHYSICA with the numerical optimisation tool VisualDOC into a fully automated design tool for the solution of electronic packaging problems. Response Surface Modelling methodology and Design of Experiments statistical tools, plus numerical optimisation techniques, are demonstrated as part of the modelling framework. Two different problems are discussed and solved using the integrated numerical FEM-optimisation tool. First, an example of thermal management of an electronic package on a board is illustrated. The location of the device is optimised to ensure reduced junction temperature and die stress, subject to a given cooling-air profile and other heat-dissipating active components. In the second example, thermo-mechanical simulations of solder creep deformation are presented to predict flip-chip reliability and are subsequently used to optimise the lifetime of solder interconnects under thermal cycling.
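
A minimal illustration of the Response Surface Modelling step in such a framework, with a made-up two-variable response standing in for the PHYSICA runs: sample a small design of experiments, fit a quadratic surface by least squares, and minimise the surrogate. Function names and numbers are illustrative only:

```python
import itertools

import numpy as np


def fem_response(x, y):
    """Stand-in for an FEM run, e.g. peak junction temperature as a
    function of two normalised device-placement coordinates."""
    return 80 + 15 * (x - 0.3) ** 2 + 10 * (y - 0.6) ** 2 + 5 * x * y


def quad_features(x, y):
    return np.array([1.0, x, y, x * x, y * y, x * y])


# 3-level full-factorial design of experiments on the unit square.
levels = [0.0, 0.5, 1.0]
doe = list(itertools.product(levels, levels))
A = np.array([quad_features(x, y) for x, y in doe])
b = np.array([fem_response(x, y) for x, y in doe])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Minimise the fitted quadratic surrogate on a dense grid.
grid = [(x, y) for x in np.linspace(0, 1, 101) for y in np.linspace(0, 1, 101)]
pred = np.array([quad_features(x, y) @ coef for x, y in grid])
print("surrogate optimum at", grid[int(np.argmin(pred))])
```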

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the modelling and design optimization of a microfeeder which, as part of a microassembly system, is used for contactless object delivery. The microfeeder consists of an array of microactuators which are controlled by electrostatic actuation and used to direct the outgoing air jet for object hovering and delivery. The airflow behaviour in the microactuator is analysed by means of fluid mechanics and Computational Fluid Dynamics (CFD) simulation from three aspects: theoretical analysis, initial design assessment, and design modification. The focus is on the basic types of microfeeder structure and the effects of structural details on the system performance. The structural pattern of the microactuator that forms the airflow nozzle is identified, and two design plans are proposed as basic structural patterns for pneumatic microactuators. The optimized design numerically demonstrates the ability to deliver objects. This paper analyses the flow distribution pattern in microactuators and points out a way towards effective design of pneumatic microfeeder systems. The optimization strategy provided here is closely relevant to the design and manufacture of pneumatic microfeeder systems.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an optimization-based approach to the design of asymmetrical filter structures having the maximum number of return- or insertion-loss ripples in the passband, such as those based upon Chebyshev function prototypes. The proposed approach has the following advantages over the general-purpose optimization techniques adopted previously: less frequency sampling is required, optimization is carried out with respect to the Chebyshev (or minimax) criterion, the problem of local minima does not arise, and optimization is usually only required for the passband. When implemented around an accurate circuit simulation, the method can include all the effects of discontinuities, junctions, fringing, etc., to reduce the amount of tuning required in the final filter. The design of asymmetrical ridged-waveguide bandpass filters is considered as an example. Measurements on a fabricated filter confirm the accuracy of the design procedure.
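
For context, the symmetric baseline that this asymmetric generalisation extends is the standard N-th order Chebyshev prototype with the equal-ripple passband characteristic

\[
|S_{21}(j\omega)|^{2} = \frac{1}{1 + \varepsilon^{2}\, T_{N}^{2}(\omega)},
\qquad
|S_{11}|^{2} = 1 - |S_{21}|^{2},
\]

where $T_{N}$ is the Chebyshev polynomial of order $N$ and $\varepsilon$ sets the ripple level. The proposed method targets the analogous minimax (equal-ripple) condition directly on the sampled passband response of the asymmetric structure.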

Relevance:

30.00%

Publisher:

Abstract:

The conventional radial basis function (RBF) network optimization methods, such as orthogonal least squares or the two-stage selection, can produce a sparse network with satisfactory generalization capability. However, the RBF width, as a nonlinear parameter in the network, is not easy to determine. In the aforementioned methods, the width is always pre-determined, either by trial and error or by random generation, and all hidden nodes share the same RBF width. This inevitably reduces network performance, and more RBF centres may then be needed to meet a desired modelling specification. In this paper we investigate a new two-stage construction algorithm for RBF networks. It utilizes the particle swarm optimization method to search for the optimal RBF centres and their associated widths. Although the new method needs more computation than conventional approaches, it can greatly reduce the model size and improve model generalization performance. The effectiveness of the proposed technique is confirmed by two numerical simulation examples.
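
A condensed sketch of the idea under illustrative assumptions (toy 1-D data, a fixed number of centres, and a basic global-best PSO; the paper's two-stage algorithm and its selection stage are more elaborate): each particle encodes candidate centres and widths, and its fitness is the training error with output weights solved by linear least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data.
x = np.linspace(-3, 3, 120)
y = np.sinc(x) + 0.05 * rng.standard_normal(x.size)

N_CENTRES = 6


def rbf_design(params):
    """params = [c1..cM, w1..wM]: centres and widths of M Gaussian RBFs."""
    c, w = params[:N_CENTRES], np.abs(params[N_CENTRES:]) + 1e-3
    return np.exp(-((x[:, None] - c[None, :]) / w[None, :]) ** 2)


def fitness(params):
    """Training MSE with output weights solved by linear least squares."""
    Phi = rbf_design(params)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return float(np.mean((Phi @ theta - y) ** 2))


# Global-best PSO over concatenated centres and widths.
dim, n_particles = 2 * N_CENTRES, 30
pos = rng.uniform(-3, 3, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best training MSE:", fitness(gbest))
```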

Relevance:

30.00%

Publisher:

Abstract:

The present work focuses on demonstrating the advantages of miniaturized reactor systems, which are essential for processes with potential for considerable heat transfer intensification, as well as for kinetic studies of highly exothermic reactions at near-isothermal conditions. The heat transfer characteristics of four different cross-flow designs of a microstructured reactor/heat-exchanger (MRHE) were studied by CFD simulation, using ammonia oxidation on a platinum catalyst as a model reaction. An appropriate distribution of the nitrogen flow used as a coolant can drastically decrease the axial temperature gradient in the reaction channels. In the case of a microreactor made of a highly conductive material, the temperature non-uniformity in the reactor is strongly dependent on the distance between the reaction and cooling channels. Appropriate design of a single periodic reactor/heat-exchanger unit, combined with a non-uniform inlet coolant distribution, reduces the temperature gradients in the complete reactor to less than 4 °C, even at conditions corresponding to an adiabatic temperature rise of about 1400 °C, which are generally not accessible in conventional reactors because of the danger of runaway reactions. To obtain the required coolant flow distribution, an optimization study was performed to determine the particular geometry of the inlet and outlet chambers of the microreactor/heat-exchanger. The predicted temperature profiles are in good agreement with experimental data from temperature sensors located along the reactant and coolant flows. The results demonstrate the clear potential of microstructured devices as reliable instruments for kinetic research as well as for proper heat management of highly exothermic reactions.
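
For orientation, the adiabatic temperature rise quoted above is the temperature increase the reacting mixture would undergo with no heat removal at all; in its usual form (a general definition in our notation, not an expression taken from the paper)

\[
\Delta T_{\mathrm{ad}} = \frac{(-\Delta H_{r})\, c_{\mathrm{in}}}{\rho\, c_{p}},
\]

with $\Delta H_{r}$ the reaction enthalpy, $c_{\mathrm{in}}$ the inlet reactant concentration, $\rho$ the mixture density, and $c_{p}$ its specific heat capacity. A value near 1400 °C makes clear why distributed cooling is needed to keep the axial gradient below 4 °C.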

Relevance:

30.00%

Publisher:

Abstract:

Fuzzy-neural-network-based inference systems are well-known universal approximators which can produce linguistically interpretable results. Unfortunately, their dimensionality can be extremely high due to an excessive number of inputs and rules, which raises the need for overall structure optimization. In the literature, various input selection methods are available, but they are applied separately from rule selection, often without considering the fuzzy structure. This paper proposes an integrated framework to optimize the number of inputs and the number of rules simultaneously. First, a method is developed to select the most significant rules, along with a refinement stage to remove unnecessary correlations. An improved information criterion is then proposed to find an appropriate number of inputs and rules to include in the model, leading to a balanced tradeoff between interpretability and accuracy. Simulation results confirm the efficacy of the proposed method.
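
A minimal sketch of an AIC-style information criterion of the kind such a framework relies on (the paper's improved criterion differs in detail). The candidate structures below are hypothetical, each summarised only by its residual sum of squares and its numbers of selected inputs and rules:

```python
import math


def info_criterion(rss, n_samples, n_inputs, n_rules, penalty=2.0):
    """Trade accuracy against structure size: lower is better.
    Complexity is counted as the number of selected inputs plus rules."""
    k = n_inputs + n_rules
    return n_samples * math.log(rss / n_samples) + penalty * k


# Hypothetical candidate structures: (rss, n_inputs, n_rules).
candidates = {
    "3 inputs, 12 rules": (4.1, 3, 12),
    "2 inputs,  7 rules": (4.2, 2, 7),
    "2 inputs,  4 rules": (6.9, 2, 4),
}

N = 200  # number of training samples (illustrative)
scores = {name: info_criterion(rss, N, ni, nr)
          for name, (rss, ni, nr) in candidates.items()}
best = min(scores, key=scores.get)
print("selected structure:", best, "score:", round(scores[best], 1))
```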