866 results for OPTIMIZATION MODEL
Abstract:
A model is developed for predicting the resolution of component pairs of interest and calculating the optimum temperature-programming condition in comprehensive two-dimensional gas chromatography (GC x GC). Based on at least three isothermal runs, retention times and peak widths at half-height on both dimensions are predicted for any linear temperature-programmed run on the first dimension with isothermal runs on the second dimension. The calculation of the optimum temperature-programming condition is based on the predicted resolution of the "difficult-to-separate components" in a given mixture. The resolution of all neighboring peaks on the first dimension is obtained from the predicted retention times and peak widths on the first dimension; the resolution on the second dimension is calculated only for adjacent components with insufficient resolution on the first dimension that elute within the same modulation period on the second dimension. The optimum temperature-programming condition is the one for which the resolutions of all components of interest in the GC x GC separation meet the analytical requirement and the analysis time is shortest. The validity of the model has been proven by using it to predict and optimize the GC x GC temperature-programming condition for an alkylpyridine mixture. (c) 2005 Elsevier B.V. All rights reserved.
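As a rough sketch of the resolution test at the core of this optimization (the half-height-width formula Rs = 1.18·Δt/(wh,1 + wh,2) is the standard one; the modulation handling, names, and thresholds below are illustrative assumptions, not the paper's code):

```python
def resolution(dt, wh_a, wh_b):
    """Chromatographic resolution from half-height widths:
    Rs = 1.18 * |dt| / (wh_a + wh_b)."""
    return 1.18 * abs(dt) / (wh_a + wh_b)

def pair_resolved(a, b, rs_min=1.5, modulation=6.0):
    """a, b: (1D retention time, 1D half-height width,
              2D retention time, 2D half-height width) predicted for one
    candidate temperature program. The second dimension is consulted only
    when the pair co-elutes within one modulation period, as in the abstract."""
    if resolution(b[0] - a[0], a[1], b[1]) >= rs_min:
        return True                      # separated on the first dimension
    if int(a[0] // modulation) != int(b[0] // modulation):
        return True                      # modulation slices the pair apart
    return resolution(b[2] - a[2], a[3], b[3]) >= rs_min
```

An optimizer in this spirit would sweep candidate heating rates, keep the programs for which every pair of interest passes, and pick the one with the shortest predicted analysis time.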
Abstract:
A computer model has been developed to optimize the performance of a 50 kWp photovoltaic system which supplies electrical energy to a dairy farm at Fota Island in Cork Harbour. Optimization of the system involves maximising the efficiency and increasing the performance and reliability of each hardware unit. The model accepts horizontal insolation, ambient temperature, wind speed, wind direction and load demand as inputs. An optimization program uses the computer model to simulate the optimum operating conditions. From this analysis, criteria are established which are used to improve the photovoltaic system operation. This thesis describes the model concepts, the model implementation and the model verification procedures used during development. It also describes the techniques which are used during system optimization. The software, which is written in FORTRAN, is structured in modular units to provide logical and efficient programming. These modular units may also be used in the modelling and optimization of other photovoltaic systems.
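A minimal sketch of the kind of modular unit such a model composes, assuming a simple cell-temperature derating law; all constants and names are illustrative, not the Fota Island system's parameters (and the original software is FORTRAN, sketched here in Python):

```python
def array_output(insolation_h, t_ambient, wind_speed, area=400.0,
                 eta_stc=0.10, noct=45.0, gamma=-0.004):
    """Crude PV array model: derate standard-test-condition efficiency
    with a cell temperature estimated from ambient conditions.
    Constants are illustrative assumptions."""
    # cell temperature rises above ambient with irradiance, falls with wind
    t_cell = (t_ambient + (noct - 20.0) * insolation_h / 800.0
              - 1.5 * wind_speed)
    eta = eta_stc * (1.0 + gamma * (t_cell - 25.0))
    return eta * area * insolation_h   # electrical output, watts
```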
Abstract:
Wireless sensor networks (WSNs) are becoming widely adopted for many applications, including complicated tasks like building energy management. However, one major concern for WSN technologies is the short lifetime and high maintenance cost caused by the limited battery energy. One solution is to scavenge ambient energy, which is then rectified to power the WSN. The objective of this thesis was to investigate the feasibility of an ultra-low-energy-consumption power management system suitable for harvesting sub-mW photovoltaic and thermoelectric energy to power WSNs. To achieve this goal, energy harvesting system architectures have been analyzed. Detailed analysis of energy storage units (ESUs) has led to an innovative ESU solution for the target applications. A battery-less, long-lifetime ESU and its associated power management circuitry, including a fast-charge circuit, self-start circuit, output voltage regulation circuit and a hybrid ESU combining a super-capacitor with a thin-film battery, were developed to achieve continuous operation of the energy harvester. Low start-up voltage DC/DC converters have been developed for 1 mW-level thermoelectric energy harvesting. The novel method of altering the thermoelectric generator (TEG) configuration in order to match impedance has been verified in this work. Novel maximum power point tracking (MPPT) circuits, exploiting the fractional open-circuit voltage method, were developed specifically for sub-1 mW photovoltaic energy harvesting applications. The MPPT energy model has been developed and verified against both SPICE simulation and implemented prototypes. Both the indoor light and thermoelectric energy harvesting methods proposed in this thesis have been implemented in prototype devices. The improved indoor light energy harvester prototype demonstrates 81% MPPT conversion efficiency with 0.5 mW input power. This improvement makes light energy harvesting from small energy sources (e.g., a credit-card-size solar panel under 500 lux indoor lighting) a feasible approach. The 50 mm × 54 mm thermoelectric energy harvester prototype generates 0.95 mW when placed on a 60 °C heat source, with 28% conversion efficiency. Both prototypes can continuously power a WSN for building energy management applications in a typical office-building environment. In addition to the hardware development, a comprehensive system energy model has been developed. This model can not only predict the available and consumed energy based on real-world ambient conditions, but can also be employed to optimize the system design and configuration. It has been verified with indoor photovoltaic energy harvesting prototypes in long-term deployment experiments.
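As an illustration of the fractional open-circuit voltage idea behind the MPPT circuits (the real implementation is analog hardware; this is only a behavioral sketch, and the hardware callbacks and the constant k below are assumptions):

```python
import time

K_FOC = 0.76   # assumed fractional open-circuit constant for the cell

def mppt_fractional_voc(disconnect_panel, read_panel_voltage,
                        set_operating_voltage, period_s=5.0):
    """Fractional open-circuit-voltage MPPT: periodically sample Voc
    with the load removed, then regulate the panel at Vmpp ≈ k * Voc.
    The three hardware hooks are hypothetical callbacks."""
    while True:
        disconnect_panel()           # let the panel float to Voc
        time.sleep(0.01)             # settling time
        v_oc = read_panel_voltage()
        set_operating_voltage(K_FOC * v_oc)
        time.sleep(period_s)         # harvest until the next sample
```

The appeal for sub-1 mW harvesting is that the tracker itself needs almost no computation or sensing, so its own overhead stays far below the harvested power.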
Abstract:
In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data being routed through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, similar to positive charges in electrostatics; the destinations are sinks of information, similar to negative charges; and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our model, we offer a scheme for energy-efficient routing: the permittivity coefficient is set to a higher value in regions of the network where nodes have high residual energy, and to a lower value where nodes have little energy left. Our simulations show that this method gives a significant increase in network lifetime compared to the shortest-path and weighted-shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend the approach to multiple destinations. In the multiple-destination case, we need to partition the network into several areas known as regions of attraction of the destinations, with each destination responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in defining the regions of attraction and deciding how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve this optimization problem. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field), and we show that in the optimal assignment of the communication load to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of the vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the destination locations, and based on this fact we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
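In hedged notation (the symbols are mine, not necessarily the thesis's), the electrostatic analogy can be summarized as:

```latex
% D(x): information-flux vector field; rho(x): net source density
% (sensors act as positive charges, destinations as negative charges);
% K(x): permittivity-like routing weight over the network area A
\min_{\mathbf{D}} \; J \;=\; \int_{A} \frac{\lVert \mathbf{D}(x)\rVert^{2}}{2\,K(x)} \, dA
\qquad \text{s.t.} \qquad \nabla \cdot \mathbf{D}(x) \;=\; \rho(x)
```

Stationarity of this constrained minimization forces \(\mathbf{D} = -K\,\nabla\phi\) for a scalar potential \(\phi\), which is exactly the electrostatic structure invoked above; energy-aware routing then amounts to raising \(K\) where residual energy is high.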
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease. This kind of test is not robust when multiple tests are performed simultaneously at different routers. We make it robust by borrowing ideas from the CDMA approach to multiple-access channels in communication theory; we call the resulting responsiveness test the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion-control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to estimate the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find these signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation, so that, as a result of orthogonality, performance does not degrade from cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
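A toy sketch of the signature idea, using Walsh-Hadamard rows as the orthogonal drop signatures (the abstract only specifies orthogonal CDMA-based signatures; the window length, rates, and detector below are illustrative):

```python
import numpy as np
from scipy.linalg import hadamard

# one +/-1 Walsh-Hadamard row per router: orthogonal drop signatures
SIGS = hadamard(8)          # 8 routers, 8 time slots per test window

def drop_rate(router, base=0.002, amp=0.001):
    """Per-slot packet-drop probability for one router: a small
    perturbation whose sign follows the router's signature."""
    return base + amp * SIGS[router]

def estimate_response(rate_trace, router):
    """Correlate the aggregate's observed sending-rate deviation (one
    value per slot) with one router's signature; orthogonality nulls
    the other routers' simultaneous tests. Units are illustrative."""
    dev = rate_trace - rate_trace.mean()
    return float(dev @ SIGS[router]) / len(dev)
```

A responsive aggregate yields a clearly negative correlation with the tester's own signature, while other routers' concurrent tests cancel in the inner product.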
Abstract:
Quantitative models are required to engineer biomaterials with environmentally responsive properties. With this goal in mind, we developed a model that describes the pH-dependent phase behavior of a class of stimulus-responsive elastin-like polypeptides (ELPs) that undergo reversible phase separation in response to their solution environment. Under isothermal conditions, charged ELPs can undergo phase separation when their charge is neutralized. Optimization of this behavior has been challenging because the pH at which they phase separate, pHt, depends on their composition, molecular weight, concentration, and temperature. To address this problem, we developed a quantitative model that uses the Henderson-Hasselbalch relationship to describe the effect of side-chain ionization on the phase-transition temperature of an ELP. The model was validated with pH-responsive ELPs containing either acidic (Glu) or basic (His) residues; the phase separation of both ELPs fits the model across a range of pH. These results have important implications for applications of pH-responsive ELPs because they provide a quantitative basis for the rational design of pH-responsive polypeptides whose transition can be triggered at a specified pH.
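A minimal sketch of the model's ingredients, assuming a simple linear coupling between ionization and transition temperature; the Henderson-Hasselbalch fraction is standard, but the coupling form and every constant here are assumptions, not the paper's fitted values:

```python
def ionized_fraction(ph, pka=4.1):
    """Henderson-Hasselbalch: fraction of acidic (Glu-like) side chains
    ionized at a given pH, f = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def transition_temperature(ph, tt_neutral=28.0, dtt_charge=35.0):
    """Illustrative model form: the transition temperature shifts from
    its charge-neutral value in proportion to side-chain ionization.
    Linear coupling and all constants are assumptions."""
    return tt_neutral + dtt_charge * ionized_fraction(ph)

# at a fixed temperature T, the ELP phase separates once
# transition_temperature(pH) drops to T; solving Tt(pH) = T gives pHt
```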
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is representative of enterprises requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
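A toy sketch of the incremental idea: newly arrived orders are spliced into the surviving population rather than restarting evolution from scratch. The objective, operators, and parameters are simplified illustrations, not RPI's production scheduler:

```python
import random

def makespan(seq, proc_times, n_machines=4):
    """Toy objective: dispatch orders in the given sequence to the
    earliest-free machine; return the completion time."""
    free = [0.0] * n_machines
    for order in seq:
        m = free.index(min(free))
        free[m] += proc_times[order]
    return max(free)

def incremental_ga(proc_times, population, new_orders, gens=50):
    """Incremental GA sketch: splice arriving orders into each existing
    individual, then evolve with truncation selection and a swap
    mutation (operators deliberately minimal)."""
    population = [ind + random.sample(new_orders, len(new_orders))
                  for ind in population]
    for _ in range(gens):
        population.sort(key=lambda s: makespan(s, proc_times))
        parents = population[: len(population) // 2]
        children = []
        for p in parents:
            c = p[:]
            i, j = random.sample(range(len(c)), 2)
            c[i], c[j] = c[j], c[i]      # swap two dispatch positions
            children.append(c)
        population = parents + children
    return min(population, key=lambda s: makespan(s, proc_times))
```

Reusing the evolved population is what makes the scheduler fast when orders stream in continuously, since most of the sequence is already near-optimal.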
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also provide a probabilistic estimate of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
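A compact sketch of the decompose-and-aggregate strategy, with naive per-component forecasts standing in for the univariate and multivariate models described above (the period and all details are illustrative):

```python
import numpy as np

def forecast_next(y, period=7):
    """Decompose a series into moving-average trend, a repeating
    seasonal profile, and a residual; forecast each component naively
    and sum the component forecasts. A weekly cycle in daily volumes
    is an assumed example of the hierarchical periodicity."""
    y = np.asarray(y, dtype=float)
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    detrended = y - trend
    profile = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(profile, len(y) // period + 1)[: len(y)]
    resid = y - trend - seasonal
    # naive component forecasts: last trend value, the seasonal value
    # for the next phase, and the mean of recent residuals
    return trend[-1] + profile[len(y) % period] + resid[-period:].mean()
```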
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools that allow an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
Melting of metallic samples in a cold crucible causes inclusions to concentrate on the surface owing to the action of the electromagnetic force in the skin layer. This process is dynamic, involving the melting stage, then quasi-stationary particle separation, and finally solidification in the cold crucible. The proposed modeling technique is based on a pseudospectral solution method for the coupled turbulent fluid flow, thermal, and electromagnetic fields within the time-varying fluid volume bounded by the free surface and, partially, by the solid crucible wall. The model uses two methods for particle tracking: (1) direct Lagrangian particle-path computation and (2) a drifting-concentration model. Lagrangian tracking is implemented for arbitrary unsteady flow. A specific numerical time-integration scheme using implicit advancement permits relatively large time steps in the Lagrangian model. The drifting-concentration model is based on a local-equilibrium drift-velocity assumption. Both methods are compared and shown to give qualitatively similar results for stationary flow situations. The particular results presented are obtained for iron alloys. Small particles, of the order of 1 μm, are shown to be less prone to separation by electromagnetic field action. In contrast, larger particles, 10 to 100 μm, are easily "trapped" by the electromagnetic field and remain on the sample surface at locations determined by their size and properties. The model allows optimization of melting power, geometry, and solidification rate.
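A behavioral sketch of the two tracking ingredients, assuming Stokes drag with response time τ; the implicit treatment of drag is what tolerates the large time steps mentioned above (all names and forms are illustrative, not the paper's scheme):

```python
import numpy as np

def step_particle(x, v, dt, fluid_u, f_em_per_mass, tau):
    """One implicit-Euler step for dv/dt = (u - v)/tau + F/m:
    solving the drag term implicitly keeps the update stable even
    when dt greatly exceeds tau. x, v are numpy vectors; fluid_u and
    f_em_per_mass are callables returning vectors at a position."""
    v_new = (v + dt * (fluid_u(x) / tau + f_em_per_mass(x))) / (1.0 + dt / tau)
    return x + dt * v_new, v_new

def drift_velocity(x, fluid_u, f_em_per_mass, tau):
    """Local-equilibrium limit (dt >> tau): v ≈ u + tau * F/m, the
    assumption underlying the drifting-concentration model."""
    return fluid_u(x) + tau * f_em_per_mass(x)
```

The same τ appearing in both functions is why the two methods agree for quasi-stationary flow: the implicit Lagrangian update relaxes onto the equilibrium drift.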
Abstract:
We consider the optimum design of pilot-symbol-assisted modulation (PSAM) schemes with feedback. The received signal is periodically fed back to the transmitter through a noiseless delayed link, and the time-varying channel is modeled as a Gauss-Markov process. We optimize a lower bound on the channel capacity that incorporates the PSAM parameters and Kalman-based channel estimation and prediction. The parameters available for the capacity optimization are the data-power adaptation strategy, pilot spacing, and pilot power ratio, subject to an average power constraint. Compared to optimized open-loop PSAM (i.e., the case where no feedback is provided from the receiver), our results show that even in the presence of feedback delay, the optimized power adaptation provides higher information rates at low signal-to-noise ratios (SNR) in moderate-rate fading channels. However, in fast fading channels, even a modest feedback delay dissipates the advantages of power adaptation.
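A minimal numerical sketch of the channel and estimator structure, assuming a scalar complex Gauss-Markov channel tracked by a scalar Kalman filter at pilot instants; the parameters are illustrative and the capacity bound itself is not reproduced here:

```python
import numpy as np

def simulate_and_predict(n=1000, a=0.99, snr_pilot=10.0, spacing=10):
    """Gauss-Markov fading, h[k+1] = a*h[k] + sqrt(1 - a^2)*w[k], with
    pilots every `spacing` symbols. Between pilots the estimate is
    propagated by the model (prediction); pilots trigger a measurement
    update. Returns the empirical estimation MSE."""
    rng = np.random.default_rng(0)
    noise_var = 1.0 / snr_pilot
    h, h_hat, p = 0.0 + 0.0j, 0.0 + 0.0j, 1.0
    mse = []
    for k in range(n):
        w = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        h = a * h + np.sqrt(1 - a**2) * w          # channel evolution
        h_hat, p = a * h_hat, a**2 * p + (1 - a**2)  # Kalman prediction
        if k % spacing == 0:                        # pilot: update
            v = (rng.standard_normal() + 1j * rng.standard_normal())
            y = h + np.sqrt(noise_var / 2) * v
            gain = p / (p + noise_var)
            h_hat, p = h_hat + gain * (y - h_hat), (1 - gain) * p
        mse.append(abs(h - h_hat) ** 2)
    return float(np.mean(mse))
```

The parameter a sets the fading rate: as a decreases (faster fading), the prediction variance between pilots grows, which is the mechanism behind the feedback-delay penalty reported above.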
Abstract:
Surrogate-based optimization methods provide a means to achieve high-fidelity design optimization at reduced computational cost by using a high-fidelity model in combination with lower-fidelity models that are less expensive to evaluate. This paper presents a provably convergent trust-region model-management methodology for variable-parameterization design models, that is, models whose design parameters are defined over different spaces. Corrected space mapping is introduced as a method to map between the variable-parameterization design spaces. It is then used with a sequential-quadratic-programming-like trust-region method for two aerospace-related design optimization problems. Results for a wing design problem and a flapping-flight problem show that the method outperforms direct optimization in the high-fidelity space. On the wing design problem, the new method achieves 76% savings in high-fidelity function calls. On a bat-flight design problem, it achieves approximately 45% time savings, although it converges to a different local minimum than the benchmark.
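A generic sketch of the trust-region model-management step such a methodology builds on: the corrected low-fidelity model's predicted improvement is compared against the actual high-fidelity improvement, and the trust region adapts accordingly (textbook logic and constants, not the paper's exact algorithm):

```python
def trust_region_step(f_hi, model, x, s, radius, eta1=0.25, eta2=0.75):
    """One model-management update. f_hi: expensive high-fidelity
    objective; model: corrected surrogate; s: candidate step found by
    minimizing the surrogate within the current radius."""
    actual = f_hi(x) - f_hi(x + s)
    predicted = model(x) - model(x + s)
    rho = actual / predicted if predicted != 0 else 0.0
    if rho < eta1:
        return x, radius * 0.5        # poor agreement: reject, shrink
    x_new = x + s                     # accept the step
    if rho > eta2:
        radius *= 2.0                 # excellent agreement: expand
    return x_new, radius
```

Convergence proofs for this family of methods hinge on the surrogate matching the high-fidelity function's value and gradient at x, which is what the corrected space mapping provides across the mismatched parameterizations.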
Abstract:
There is an increasing need to identify the effect of mix composition on the rheological properties of cementitious grouts, using the minislump, Marsh cone, cohesion plate, washout, and cube tests to determine the fluidity, cohesion, and mechanical properties relevant to grouting applications. Mixture proportioning involves tailoring several parameters to achieve adequate fluidity, cohesion, washout resistance, and compressive strength. This paper proposes a statistical design approach using a composite fractional factorial design, carried out to model the influence of key parameters on the performance of cement grouts. The performance-related responses included minislump, flow time using the Marsh cone, cohesion measured by the Lombardi plate meter, washout mass loss, and compressive strength at 3, 7, and 28 days. The statistical models are valid for mixtures with a water-to-binder ratio of 0.37–0.53, 0.4–1.8% addition of high-range water reducer (HRWR) by mass of binder, 4–12% silica fume as replacement of cement by mass, and 0.02–0.8% addition of viscosity-modifying admixture (VMA) by mass of binder. The models enable the identification of the underlying factors and interactions that influence the modeled responses of cement grout. The comparison between predicted and measured responses indicated good accuracy of the established models in describing the effect of the independent variables on fluidity, cohesion, washout resistance, and compressive strength. This paper demonstrates the usefulness of the models for understanding trade-offs between parameters. Multiparametric optimization is used to establish isoresponses for a desirability function for cement grout. An increase in HRWR led to an increase in fluidity and washout, a reduction in plate cohesion, and a reduction in Marsh cone time. An increase in VMA produced a reduction in fluidity and washout mass loss, and an increase in Marsh cone time and plate cohesion. Results indicate that the use of silica fume increased the plate cohesion and Marsh cone time and reduced the minislump. Additionally, the silica fume improved the compressive strength and the washout resistance.
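A minimal sketch of fitting such a factorial-design response model, assuming coded factors in [-1, 1] and a main-effects-plus-interactions form; the factors and responses named in the comment come from the abstract, everything else is illustrative:

```python
import numpy as np

def fit_response_surface(X, y):
    """Least-squares fit of y = b0 + sum(bi*xi) + sum(bij*xi*xj),
    the usual two-factor-interaction model for a fractional factorial
    design. X: (runs, factors) matrix of coded levels; y: one response."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# e.g. X columns: coded w/b ratio, HRWR, silica fume, VMA;
# y: minislump (one such model is fitted per measured response)
```

Fitting one such model per response is what allows the desirability-function optimization over all responses at once.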
Abstract:
The conventional radial basis function (RBF) network optimization methods, such as orthogonal least squares or the two-stage selection, can produce a sparse network with satisfactory generalization capability. However, the RBF width, as a nonlinear parameter in the network, is not easy to determine. In the aforementioned methods, the width is always pre-determined, either by trial-and-error, or generated randomly. Furthermore, all hidden nodes share the same RBF width. This will inevitably reduce the network performance, and more RBF centres may then be needed to meet a desired modelling specification. In this paper we investigate a new two-stage construction algorithm for RBF networks. It utilizes the particle swarm optimization method to search for the optimal RBF centres and their associated widths. Although the new method needs more computation than conventional approaches, it can greatly reduce the model size and improve model generalization performance. The effectiveness of the proposed technique is confirmed by two numerical simulation examples.
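A plain global-best PSO sketch of the search engine involved; a particle would encode candidate RBF centres together with their widths, and the objective would be the resulting network's training error (this is the generic algorithm, not the paper's two-stage construction):

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Global-best particle swarm optimization with the standard
    velocity update v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
    All coefficients are common textbook values, not tuned ones."""
    rng = np.random.default_rng(0)
    x = rng.uniform(*bounds, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```

Letting the swarm search widths per-node is the key difference from the conventional schemes criticized above, where a single pre-set width is shared by all hidden nodes.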
Abstract:
In this paper we investigate the influence of a power-law noise model, also known as 1/f^α noise, on the performance of a feed-forward neural network used to predict time series. We introduce an optimization procedure that optimizes the parameters of the neural network by maximizing the likelihood function based on the power-law noise model. We show that our optimization procedure minimizes the mean squared error, leading to an optimal prediction. Further, we present numerical results applying the method to time series from the logistic map and the annual number of sunspots, and demonstrate that a power-law noise model gives better results than a Gaussian noise model.
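As an illustration of likelihood-based training under a heavy-tailed noise model (the specific power-law density below is an assumption; the paper's exact form may differ, but the principle of replacing the Gaussian/MSE loss with a noise-model likelihood is the same):

```python
import numpy as np

def power_law_nll(residuals, alpha=3.0, scale=1.0):
    """Negative log-likelihood, up to an additive constant, for an
    assumed power-law residual density p(e) ∝ (1 + |e|/s)^(-alpha).
    Heavy tails penalize large residuals far less than a Gaussian
    does, so outliers dominate training less."""
    e = np.abs(residuals) / scale
    return alpha * np.sum(np.log1p(e)) + residuals.size * np.log(scale)

# gradient-based training would minimize power_law_nll(y - net(x))
# in place of the usual mean-squared-error loss
```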
Abstract:
To improve the performance of classification using Support Vector Machines (SVMs) while reducing the model selection time, this paper introduces Differential Evolution, a heuristic method for model selection in two-class SVMs with a RBF kernel. The model selection method and related tuning algorithm are both presented. Experimental results from application to a selection of benchmark datasets for SVMs show that this method can produce an optimized classification in less time and with higher accuracy than a classical grid search. Comparison with a Particle Swarm Optimization (PSO) based alternative is also included.
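Since DE and RBF-kernel SVMs are both standard, a self-contained sketch of the approach can use stock library calls; the dataset, search bounds, and budget are illustrative, not the paper's experimental setup:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# toy two-class dataset standing in for a benchmark set
X, y = make_classification(n_samples=300, random_state=0)

def neg_cv_accuracy(params):
    """DE objective: negated cross-validated accuracy of an RBF-kernel
    SVM; the hyperparameters are searched in log10 space."""
    log_c, log_gamma = params
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    return -cross_val_score(clf, X, y, cv=5).mean()

# bounds on log10(C) and log10(gamma); maxiter kept small for a sketch
result = differential_evolution(neg_cv_accuracy,
                                bounds=[(-2, 3), (-4, 1)],
                                maxiter=20, seed=0)
print("best C=%.3g, gamma=%.3g" % (10 ** result.x[0], 10 ** result.x[1]))
```

Unlike a grid search, DE spends its evaluation budget adaptively, which is the source of the reported time savings.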
Abstract:
The motivation for this paper is to present an approach for rating the quality of the parameters in a computer-aided design model for use as optimization variables. Parametric Effectiveness is computed as the ratio of change in performance achieved by perturbing the parameters in the optimum way, to the change in performance that would be achieved by allowing the boundary of the model to move without the constraint on shape change enforced by the CAD parameterization. The approach is applied in this paper to optimization based on adjoint shape sensitivity analyses. The derivation of parametric effectiveness is presented for optimization both with and without the constraint of constant volume. In both cases, the movement of the boundary is normalized with respect to a small root mean squared movement of the boundary. The approach can be used to select an initial search direction in parameter space, or to select sets of model parameters which have the greatest ability to improve model performance. The approach is applied to a number of example 2D and 3D FEA and CFD problems.
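In hedged notation (the symbols are mine, not the paper's), parametric effectiveness can be written as:

```latex
% eta: parametric effectiveness; J: performance functional;
% delta p: CAD-parameter perturbation; delta b: free boundary movement,
% both normalized to the same small RMS boundary displacement
\eta \;=\; \frac{\displaystyle \max_{\delta p}\, \delta J(\delta p)}
                {\displaystyle \max_{\delta b}\, \delta J(\delta b)},
\qquad 0 \le \eta \le 1
```

A value near 1 indicates the CAD parameterization can realize nearly all of the improvement the adjoint sensitivities identify; a low value flags a parameterization worth revising before optimization begins.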
Abstract:
The present work focuses on demonstrating the advantages of miniaturized reactor systems, which are essential for processes with potential for considerable heat-transfer intensification as well as for kinetic studies of highly exothermic reactions at near-isothermal conditions. The heat-transfer characteristics of four different cross-flow designs of a microstructured reactor/heat-exchanger (MRHE) were studied by CFD simulation, using ammonia oxidation on a platinum catalyst as a model reaction. An appropriate distribution of the nitrogen flow used as a coolant can drastically decrease the axial temperature gradient in the reaction channels. In the case of a microreactor made of a highly conductive material, the temperature non-uniformity in the reactor depends strongly on the distance between the reaction and cooling channels. Appropriate design of a single periodic reactor/heat-exchanger unit, combined with a non-uniform inlet coolant distribution, reduces the temperature gradients in the complete reactor to less than 4 °C, even at conditions corresponding to an adiabatic temperature rise of about 1400 °C, which are generally not accessible in conventional reactors because of the danger of runaway reactions. To obtain the required coolant flow distribution, an optimization study was performed to determine the particular geometry of the inlet and outlet chambers of the microreactor/heat-exchanger. The predicted temperature profiles are in good agreement with experimental data from temperature sensors located along the reactant and coolant flows. The results demonstrate the clear potential of microstructured devices as reliable instruments for kinetic research as well as for proper heat management of highly exothermic reactions. (C) 2002 Elsevier Science B.V. All rights reserved.