912 results for Optimization methods


Relevance: 30.00%

Abstract:

In this work we introduce a new mathematical tool for optimizing routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of this vector field at every location, and the magnitude of the field at each location represents the density of data transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that, to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (permittivity coefficient).

In one application of this vector field model, we offer a scheme for energy-efficient routing. The scheme raises the permittivity coefficient in regions of the network where nodes have high residual energy and lowers it in regions where nodes have little energy left. Our simulations show that this method significantly increases network lifetime compared to shortest-path and weighted-shortest-path schemes.

Our initial focus is on the case of a single destination in the network; we later extend the approach to multiple destinations. With multiple destinations, the network must be partitioned into areas known as regions of attraction, each destination being responsible for collecting all messages generated in its region. The optimization problem is then how to define the regions of attraction and how much communication load to assign to each destination so as to optimize network performance. We use the vector field model to solve this problem: we define a conservative vector field, which can therefore be written as the gradient of a scalar field (a potential field), and show that in the optimal assignment of the network's communication load, the potential field takes equal values at the locations of all the destinations.

Another application of the vector field model is finding the optimal locations of the destinations. We show that the vector field gives the gradient of the cost function with respect to the destination locations, and based on this fact we suggest an algorithm, applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of the proposed schemes is confirmed by several examples and simulation experiments.
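In rough outline, the electrostatics analogy can be written as follows (the notation is chosen here for illustration; the abstract does not fix symbols):

```latex
% Notation assumed for illustration: D = data-flow density field, rho = net
% information source density (sensors positive, destinations negative),
% k(x) = permittivity-like coefficient derived from residual energy.
\begin{align*}
  \nabla \cdot \mathbf{D}(x) &= \rho(x)
     && \text{(flow conservation; Gauss's-law analogue)} \\
  \mathbf{D}(x) &= k(x)\,\mathbf{E}(x), \quad \mathbf{E}(x) = -\nabla\phi(x)
     && \text{(conservative field from a potential } \phi\text{)} \\
  J &= \int_{A} \frac{\lvert\mathbf{D}(x)\rvert^{2}}{2\,k(x)}\,dA
     && \text{(quadratic communication cost over the network area } A\text{)}
\end{align*}
```

Minimizing J subject to the conservation constraint yields the Maxwell-like PDE system; choosing k large where residual energy is high steers flow through those regions, which is the energy-aware routing scheme above, and equality of the potential at all destinations characterizes the optimal load split in the multi-destination case.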
In another part of this work, we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates, defined as the degree to which a TCP aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make it robust to simultaneous tests by borrowing ideas from the CDMA approach to multiple-access channels in communication theory; we call the resulting responsiveness test the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to estimate the proportion of a TCP traffic aggregate that does not conform to protocol specifications and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing its response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy with multiple-access communication to find these signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We show the efficacy of our methods through mathematical analysis and extensive simulation experiments.
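A minimal sketch of the signature-correlation idea behind CAPM (the signal model, Hadamard signatures, and all parameters are illustrative assumptions, not the implementation from this work):

```python
# Toy sketch: orthogonal +/-1 drop-rate signatures let several routers test the
# same aggregate simultaneously without cross-interference.
import numpy as np

def walsh_hadamard(n_routers: int, length: int) -> np.ndarray:
    """Return n_routers zero-mean +/-1 orthogonal signatures (Hadamard rows)."""
    h = np.array([[1.0]])
    while h.shape[0] < max(n_routers + 1, length):
        h = np.block([[h, h], [h, -h]])
    return h[1 : n_routers + 1, :length]   # skip the all-ones row

rng = np.random.default_rng(0)
T, n_routers = 64, 4
sigs = walsh_hadamard(n_routers, T)        # one drop-rate signature per router

# True responsiveness of the aggregate to each router's perturbation
# (unknown in practice; used here only to synthesize the observed signal).
true_resp = np.array([0.9, 0.5, 0.0, 0.7])

# Observed aggregate rate change = superposition of responses + noise.
rate_change = true_resp @ sigs + 0.1 * rng.standard_normal(T)

# Each router recovers "its" response by correlating with its own signature;
# orthogonality cancels the other routers' simultaneous tests.
estimates = sigs @ rate_change / T
print(np.round(estimates, 2))              # approx. [0.9, 0.5, 0.0, 0.7]
```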

Relevance: 30.00%

Abstract:

BACKGROUND AND PURPOSE: Previous studies have demonstrated that treatment strategy plays a critical role in ensuring maximum stone fragmentation during shockwave lithotripsy (SWL). We aimed to develop an optimal treatment strategy in SWL to produce maximum stone fragmentation. MATERIALS AND METHODS: Four treatment strategies were evaluated using an in-vitro experimental setup that mimics stone fragmentation in the renal pelvis. Spherical stone phantoms were exposed to 2100 shocks using the Siemens Modularis (electromagnetic) lithotripter. The treatment strategies comprised increasing output voltage (100 shocks at 12.3 kV, 400 shocks at 14.8 kV, and 1600 shocks at 15.8 kV) and decreasing output voltage (1600 shocks at 15.8 kV, 400 shocks at 14.8 kV, and 100 shocks at 12.3 kV). Both the increasing- and decreasing-voltage protocols were run at pulse repetition frequencies (PRF) of 1 and 2 Hz. Fragmentation efficiency was determined using a sequential sieving method to isolate fragments less than 2 mm. A fiberoptic probe hydrophone was used to characterize the pressure waveforms at different output voltage and frequency settings. In addition, a high-speed camera was used to assess the cavitation activity in the lithotripter field produced by each treatment strategy. RESULTS: The increasing output voltage strategy at 1 Hz PRF produced the best stone fragmentation efficiency. This result was significantly better than that of the decreasing voltage strategy at 1 Hz PRF (85.8% vs 80.8%, P=0.017) and of the same strategy at 2 Hz PRF (85.8% vs 79.59%, P=0.0078). CONCLUSIONS: A pretreatment dose of 100 low-voltage output shockwaves (SWs) at 60 SWs/min before increasing to a higher voltage output produces the best overall stone fragmentation in vitro. These findings could lead to increased fragmentation efficiency in vivo and higher success rates clinically.

Relevance: 30.00%

Abstract:

An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.

To handle the increase in the volume and diversity of demands, we first present a high-performance, scalable, real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order-dispatching sequence and balance resource utilization. Compared to prior work, this solution scales to high order volumes and provides fast scheduling for orders that require complex fulfillment procedures. Experimental results highlight its potential to reduce production inefficiencies and enhance the productivity of an enterprise.
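As a rough illustration of genetic scheduling of a dispatch sequence (the permutation encoding, operators, and the population seeding standing in for the "incremental" aspect are assumptions; the IGA in this work is more elaborate):

```python
# Minimal permutation GA for order dispatch: minimize makespan when jobs are
# dispatched in sequence to the earliest-free machine.
import random

def makespan(order, durations, n_machines=3):
    """Dispatch jobs in the given order to the earliest-free machine."""
    loads = [0.0] * n_machines
    for j in order:
        i = loads.index(min(loads))
        loads[i] += durations[j]
    return max(loads)

def evolve(durations, pop_size=30, generations=200, seed_order=None):
    n = len(durations)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    if seed_order is not None:           # "incremental" flavor: reuse the
        pop[0] = list(seed_order)        # previous schedule as a seed
    for _ in range(generations):
        pop.sort(key=lambda o: makespan(o, durations))
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)                  # order crossover
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            if random.random() < 0.2:                     # swap mutation
                i, k = random.sample(range(n), 2)
                child[i], child[k] = child[k], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: makespan(o, durations))

durations = [random.uniform(1, 10) for _ in range(20)]
best = evolve(durations)
print(round(makespan(best, durations), 2))
```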

We next discuss the analysis and prediction of attributes at different levels of the enterprise hierarchy. We start from the fundamental processes underlying real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms; in addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they provide a probabilistic estimate of the predicted status. An order generally consists of multiple serial and parallel processes, so we next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models through flexible decision-integration mechanisms. Experimental results show that adopting the due dates recommended by the model can significantly reduce an enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term prediction accuracy.
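A toy sketch of the component-wise strategy (the decomposition method and the simple per-component forecasters are stand-ins chosen for illustration):

```python
# Decompose a series into trend + periodic + remainder, forecast each
# component separately, then sum the component forecasts.
import numpy as np

def decompose(y, period):
    """Moving-average trend, mean-cycle periodic component, and remainder."""
    k = period
    trend = np.convolve(y, np.ones(k) / k, mode="same")  # edge bias ignored
    detr = y - trend
    seasonal = np.array([detr[i::k].mean() for i in range(k)])
    periodic = np.tile(seasonal, len(y) // k + 1)[: len(y)]
    return trend, periodic, y - trend - periodic

rng = np.random.default_rng(1)
t = np.arange(200)
y = 0.05 * t + np.sin(2 * np.pi * t / 7) + 0.2 * rng.standard_normal(200)

trend, periodic, resid = decompose(y, period=7)

# Forecast each component, then aggregate, mirroring the "predict the
# components, sum the predictions" strategy described above.
h = 7
trend_fc = trend[-1] + (trend[-1] - trend[-2]) * np.arange(1, h + 1)  # drift
periodic_fc = periodic[-h:]                                # seasonal naive
forecast = trend_fc + periodic_fc                          # remainder ~ 0
print(np.round(forecast, 2))
```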

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools that allow an EIS to derive insightful knowledge from data and use it to guide production management. These tools are expected to help enterprises increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decisions.

Relevance: 30.00%

Abstract:

Fuzzy-neural-network-based inference systems are well-known universal approximators that can produce linguistically interpretable results. Unfortunately, their dimensionality can be extremely high owing to an excessive number of inputs and rules, which raises the need for overall structure optimization. Various input-selection methods are available in the literature, but they are applied separately from rule selection, often without considering the fuzzy structure. This paper proposes an integrated framework that optimizes the number of inputs and the number of rules simultaneously. First, a method is developed to select the most significant rules, with a refinement stage to remove unnecessary correlations. An improved information criterion is then proposed to find an appropriate number of inputs and rules to include in the model, leading to a balanced tradeoff between interpretability and accuracy. Simulation results confirm the efficacy of the proposed method.
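For orientation, structure-selection criteria of this kind typically take the following generic form (a sketch only; the paper's improved criterion differs in its penalty term):

```latex
% Generic information-criterion form for structure selection (illustrative):
% N data points, n_in selected inputs, n_r retained rules.
\[
  J(n_{\mathrm{in}}, n_{r}) \;=\;
  N \ln\!\big(\mathrm{MSE}(n_{\mathrm{in}}, n_{r})\big)
  \;+\; \lambda\, p(n_{\mathrm{in}}, n_{r}),
\]
% where p counts the free parameters implied by the chosen inputs and rules
% and lambda weights parsimony (interpretability) against fit (accuracy);
% the selected structure minimizes J.
```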

Relevance: 30.00%

Abstract:

In the current investigation, rubber/clay nanocomposites were prepared by two different methods using hydrogenated nitrile butadiene rubber (HNBR) and the organoclay Cloisite 15A (C15A). A novel approach involving swelling of C15A by ultrasonication in HNBR solution was carried out to improve the exfoliation and compatibilization of the organoclay with the HNBR matrix. With the addition of 5 phr of clay, the elongation at break and tear strength improved by 16% and 24%, respectively. The effects of two coupling agents, an amino-functional silane and a tetrasulfido silane, on the nanocomposites were investigated; with the silanes, the elongation at break and tear strength improved by 46% and 77%, respectively. The improvement in the mechanical properties is attributed to enhanced interaction between the organoclay and the HNBR matrix, which was studied by X-ray diffraction and transmission electron microscopy. The pre-dispersion technique clearly yields a marked improvement in dispersion and properties owing to better filler–rubber compatibility. © 2010 American Institute of Physics.

Relevance: 30.00%

Abstract:

Quantum annealing is a promising tool for solving optimization problems, similar in some ways to the traditional (classical) simulated annealing of Kirkpatrick et al. Simulated annealing takes advantage of thermal fluctuations to explore the optimization landscape of the problem at hand, whereas quantum annealing employs quantum fluctuations. Intriguingly, quantum annealing has proved more effective than its classical counterpart in many applications. We illustrate the theory and the practical implementation of both classical and quantum annealing, highlighting the crucial differences between the two methods, by means of results recently obtained in experiments, in simple toy models, and in more challenging combinatorial optimization problems (namely, the random Ising model and the travelling salesman problem). The techniques used to implement quantum and classical annealing are either deterministic evolutions, for the simplest models, or Monte Carlo approaches, for harder optimization tasks. We discuss the pros and cons of these approaches and their possible connections to the landscape of the problem addressed.
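The transverse-field Ising formulation that underlies this comparison (textbook form, consistent with the random Ising model mentioned above):

```latex
% Transverse-field formulation of quantum annealing for an Ising cost function:
\[
  H(t) \;=\; -\sum_{\langle i,j \rangle} J_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}
             \;-\; \Gamma(t)\sum_{i}\sigma^{x}_{i},
\]
% The first term encodes the optimization problem (e.g., a random Ising
% model); the transverse field Gamma(t) supplies the quantum fluctuations and
% is annealed from a large initial value to zero. Classical simulated
% annealing instead keeps only the first term and lowers a temperature T(t)
% that governs thermal fluctuations in a Metropolis Monte Carlo dynamics.
```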

Relevance: 30.00%

Abstract:

Aircraft design is a complex, long, and iterative process that draws on various specialties and optimization tools. These tools and specialties, however, do not include manufacturing, which is often considered later in the product development process, leading to higher cost and time delays. This work focuses on the development of an automated design tool that accounts for manufacture during the design process, concentrating on early geometry definition, which in turn informs assembly planning. To accomplish this task, the design process needs to be open to any variation in structural configuration while maintaining the design intent. Redefining design intent as a map that links a set of requirements to a set of functions using a numerical approach enables the design process itself to be treated as a mathematical function. This definition allows the design process to take captured design knowledge and translate it into a set of mathematical equations that design the structure. The process is articulated in this paper using the structural design and definition of an aircraft fuselage section as an exemplar.

Relevance: 30.00%

Abstract:

Modern control methods such as optimal control and model predictive control (MPC) provide a framework for simultaneously regulating tracking performance and limiting control energy, and have therefore been widely deployed in industrial applications. Yet, owing to their simplicity and robustness, conventional P (proportional) and PI (proportional–integral) controllers are still the most common choice in many engineering systems, such as electric power systems, automotive applications, and heating, ventilation and air conditioning (HVAC) for buildings, where energy efficiency and energy saving are critical issues. Little has been done so far to explore the effect of their parameter tuning on both system performance and control energy consumption, or how these two objectives are correlated within the P and PI control framework. In this paper, P and PI controllers are designed with simultaneous consideration of these two aspects. Two case studies are investigated in detail: the control of voltage source converters (VSCs) for transmitting offshore wind power to an onshore AC grid through high-voltage DC links, and the control of HVAC systems. Results reveal that a better trade-off between tracking performance and control energy can be achieved through a proper choice of the P and PI controller parameters.
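A toy sketch of the tracking-versus-energy trade-off (the first-order plant, Euler integration, and metrics are illustrative assumptions, not the paper's VSC or HVAC case studies):

```python
# Closed-loop step response of dx/dt = -x + u under PI control, scoring both
# integral squared tracking error and integral squared control effort.
def simulate_pi(kp, ki, dt=0.01, t_end=10.0, ref=1.0):
    n = int(t_end / dt)
    x, integ, ise, energy = 0.0, 0.0, 0.0, 0.0
    for _ in range(n):
        e = ref - x
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        x += (-x + u) * dt               # explicit Euler step of the plant
        ise += e * e * dt                # tracking-performance metric
        energy += u * u * dt             # control-energy metric
    return ise, energy

# Sweeping kp exposes the trade-off: a higher gain tracks better but spends
# more control energy, so a "proper choice" balances the two objectives.
for kp in (0.5, 2.0, 8.0):
    ise, energy = simulate_pi(kp, ki=1.0)
    print(f"kp={kp:4.1f}  ISE={ise:6.3f}  energy={energy:6.2f}")
```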

Relevance: 30.00%

Abstract:

An environment has been created for the optimisation of aerofoil profiles with the inclusion of small surface features. For TS-wave-dominated flows, the paper examines the consequences of adding a depression to the aerodynamic optimisation of an NLF aerofoil, and describes the geometry-definition fidelity and optimisation algorithm employed in the development process. The variables that define the depression were fixed for this optimisation investigation; however, a preliminary study is presented demonstrating the sensitivity of the flow to the depression characteristics. Solutions to the optimisation problem are then presented using both gradient-based and genetic-algorithm techniques. For an accurate representation of the inclusion of small surface perturbations, it is concluded that a global optimisation method is required for this type of aerofoil optimisation task, owing to the nature of the response surface generated. When dealing with surface features, changes in transition onset are likely to be non-linear, so a robust optimisation algorithm is critical, suggesting that gradient-based methods alone are not suited to this framework.

Relevance: 30.00%

Abstract:

The worsening of process variations and the consequent increased spreads in circuit performance and power consumption hinder the satisfaction of targeted budgets and lead to yield loss. Corner-based design and the adoption of design guardbands can limit the yield loss. However, in many cases such methods fail to capture the real effects, which may be considerably better than predicted, leading to increasingly pessimistic designs. The situation is even more severe in memories, which consist of substantially different individual building blocks, further complicating accurate analysis of the impact of variations at the architecture level and leaving many potential issues uncovered and opportunities unexploited. In this paper, we develop a framework for capturing non-trivial statistical interactions among all the components of a memory/cache. The developed tool is able to find the optimum memory/cache configuration under various constraints, allowing designers to make the right choices early in the design cycle and consequently improve performance, energy, and especially yield. Our results indicate that considering the architectural interactions between the memory components allows the pessimistic access times predicted by existing techniques to be relaxed.

Relevance: 30.00%

Abstract:

Visibility problems have many applications to real-world situations. Among the best known, and most exhaustively studied, are those involving the concepts of guarding and hiding in geometric structures (guarding and hiding problems). In this work we study visibility problems in the geometric structures known as polygons, since these can appropriately represent many real objects and are easy to manipulate computationally. The objective of guarding problems is to determine the minimum number of positions at which to place devices in a given polygon so that these devices can "see" the whole polygon. Conversely, the objective of hiding problems is to determine the maximum number of positions in a given polygon such that no two positions can "see" each other. Unfortunately, most visibility problems in polygons are NP-hard, which gives rise to two lines of research: the development of algorithms that produce approximate solutions, and the determination of exact solutions for special classes of polygons. Following these two lines of research, the work is divided into two parts. In the first part, we propose approximation algorithms, based essentially on metaheuristics and hybrid metaheuristics, to solve several visibility problems in both arbitrary and orthogonal polygons. The problems studied are the Maximum Hidden Vertex Set problem, the Minimum Vertex Guard Set problem, the Minimum Vertex Floodlight Set problem, and the Minimum Vertex k-Modem Set problem. We also develop methods to determine the approximation ratio of the proposed algorithms. For each problem, the presented algorithms are implemented and a statistical study is carried out to establish which algorithm obtains the best solutions in reasonable time. This study shows that hybrid metaheuristics are, in general, the best strategies for solving the visibility problems studied. In the second part of this dissertation, we address the Minimum Vertex Guard Set, Maximum Hidden Set, and Maximum Hidden Vertex Set problems, identifying and studying classes of polygons for which exact solutions and/or combinatorial bounds are determined.
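As a point of reference for the guarding problems above, a greedy set-cover heuristic is the usual constructive baseline that metaheuristics then improve on (the visibility sets are assumed precomputed here; polygon visibility computation is omitted):

```python
# Greedy baseline for Minimum Vertex Guard Set, viewed as set cover:
# repeatedly pick the vertex guard that sees the most still-uncovered points.
def greedy_vertex_guards(visible):
    """visible[g] is the set of point indices vertex g sees; cover them all."""
    uncovered = set().union(*visible.values())
    guards = []
    while uncovered:
        g = max(visible, key=lambda v: len(visible[v] & uncovered))
        guards.append(g)
        uncovered -= visible[g]
    return guards

# Toy instance: 4 candidate vertex guards, 6 sample points inside the polygon.
visible = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {1, 5}}
print(greedy_vertex_guards(visible))   # [0, 2] covers all six points
```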

Relevance: 30.00%

Abstract:

Environmental transport of pollutants comprises distinct processes such as volatilization, leaching, and surface runoff. Sorption is one of the most important phenomena affecting leaching, and thus the fate of hydrophobic organic pollutants in soils, and it also controls their distribution in the soil/water environment. The work developed focuses on the optimization of analytical techniques for monitoring the sorption behaviour of the organic pollutants 17α-ethinylestradiol (EE2) and atrazine, and their fate in the aqueous environment. Initially, several analytical techniques were developed, including micellar electrokinetic chromatography, spectral deconvolution using UV-Vis and fluorescence spectroscopy, and an enzyme-linked immunosorbent assay. Optimization, method performance, and recovery tests are described and the results discussed. To evaluate the applicability of the previously optimized methods, sorption of atrazine and EE2 to soil samples was then studied. The work provides several methodological options for following the sorption of atrazine onto soils; the choice depends on the laboratory conditions and the analyst's preferences, and the advantages and disadvantages of each methodology should be evaluated first. The second part of this work consisted of studying the sorption behaviour of these two hydrophobic organic pollutants onto different soil samples. Chemical characterization of the soil organic matter, essential for understanding the binding mechanisms responsible for the interactions, was carried out. The results for atrazine binding to organic matter indicate that carboxyl units and aromatic-rich organic matter are the most efficient binding agents for atrazine. EE2 adsorbs strongly to soil organic matter and is mainly stabilized by hydrophobic interactions, through face-to-face association of aromatic nuclei with the surface and/or with another EE2 molecule. Farmyard-manure soil contains higher contents of aromatic and carboxyl units, indicating that this type of manure can be used effectively to minimize the residual toxicity of EE2 and atrazine present in soils, increasing sorption and reducing leaching into water resources. Since the final destination of organic pollutants can be ground, surface, and/or waste water, atrazine and 17α-ethinylestradiol were quantified in several water samples.
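Sorption studies of this kind are conventionally quantified with isotherm models; a standard choice for hydrophobic pollutants in soil is the Freundlich isotherm (its use here is an assumption — the abstract does not name a model):

```latex
% Freundlich isotherm (assumed for illustration; not stated in the abstract):
\[
  q_e = K_F \, C_e^{1/n},
\]
% where q_e is the amount sorbed per unit soil mass at equilibrium, C_e the
% equilibrium solution concentration, K_F the Freundlich capacity coefficient,
% and 1/n the intensity parameter (degree of nonlinearity of sorption).
```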

Relevance: 30.00%

Abstract:

The relationship between epidemiology, mathematical modelling, and computational tools makes it possible to build and test theories about the development and control of a disease. This thesis is motivated by the study of epidemiological models applied to infectious diseases from an Optimal Control perspective, with particular emphasis on dengue. A tropical and subtropical mosquito-borne disease, dengue affects about 100 million people per year and is considered a major public health concern by the World Health Organization. The mathematical models developed and tested in this work are based on ordinary differential equations describing the dynamics underlying the disease, namely the interaction between humans and mosquitoes. An analytical study is carried out concerning their equilibrium points, stability, and basic reproduction number. The spread of dengue can be attenuated through vector control measures, such as the use of specific insecticides and educational campaigns. Since the development of a potential vaccine has been a recent worldwide priority, models based on the simulation of a hypothetical vaccination process in a population are proposed. Based on Optimal Control theory, the optimal strategies for the use of these controls are analysed, along with their repercussions for the reduction/eradication of the disease during an outbreak in the population, under a bioeconomic approach. The formulated problems are solved numerically using direct and indirect methods. The former discretize the problem, reformulating it as a nonlinear optimization problem; the indirect methods use Pontryagin's Maximum Principle as a necessary condition to find the optimal curve for the respective control. Several numerical software packages are used in both strategies. Throughout this work there was always a compromise between the realism of the epidemiological models and their mathematical tractability.
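A minimal host-vector model of the kind described (the compartments, parameters, and quadratic control cost are generic textbook choices, not the thesis's exact formulation):

```latex
% Generic host-vector SIR/SI sketch; u(t) is a control such as insecticide
% use or vaccination coverage (all notation assumed for illustration):
\begin{align*}
  \dot S_h &= \mu_h N_h - \beta_h \tfrac{I_v}{N_h} S_h - u(t)\,S_h - \mu_h S_h, &
  \dot I_h &= \beta_h \tfrac{I_v}{N_h} S_h - (\eta_h + \mu_h)\,I_h, \\
  \dot S_v &= \mu_v N_v - \beta_v \tfrac{I_h}{N_h} S_v - \mu_v S_v, &
  \dot I_v &= \beta_v \tfrac{I_h}{N_h} S_v - \mu_v I_v,
\end{align*}
% with a bioeconomic objective of the form
\[
  \min_{u} \int_0^T \big( \gamma_1 I_h(t) + \gamma_2 u(t)^2 \big)\,dt,
\]
% solved directly (discretize, then nonlinear programming) or indirectly via
% Pontryagin's Maximum Principle, as described above.
```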

Relevance: 30.00%

Abstract:

The stamping industry has shown growing interest in numerical simulations of sheet-metal forming processes, including inverse engineering methods, mainly because the trial-and-error techniques widely used in the past are no longer economically competitive. The use of simulation codes is currently common practice in industrial settings, since the results typically obtained with codes based on the Finite Element Method (FEM) are well accepted by the industrial and scientific communities. To obtain accurate stress and strain fields, an efficient FEM analysis requires correct input data, such as geometries, meshes, nonlinear constitutive laws, loadings, friction laws, etc. Inverse problems can be considered in order to overcome these difficulties. In the presented work, the following inverse problems in computational mechanics are presented and analysed: (i) parameter identification problems, which concern the determination of input parameters to be used subsequently in constitutive models in numerical simulations; and (ii) problems of initial geometric definition of sheets and tools, in which the goal is to determine the initial shape of a sheet or a tool so as to obtain a given geometry after a forming process. New optimization strategies are introduced and implemented, leading to more accurate constitutive-model parameters. The aim of these strategies is to take advantage of the strengths of each algorithm and improve the overall efficiency of classical optimization methods, which are based on single-stage processes. Deterministic algorithms, algorithms inspired by evolutionary processes, and combinations of the two are used in the proposed strategies. Cascade, parallel, and hybrid strategies are presented in detail, the hybrid strategies consisting of combinations of cascade and parallel strategies. Two distinct methods for evaluating the objective function in parameter identification processes are presented and analysed: a single-point analysis and a finite element analysis. The single-point evaluation characterizes an infinitesimal amount of material subjected to a given strain history, whereas in the finite element analysis the constitutive model is implemented and considered at every integration point. Inverse problems of initial geometric definition of sheets and tools are then presented and described. For the optimization of the initial shape of a metal sheet, the definition of the initial blank for forming an oil-pan element is taken as the case study. In this context, a study of the influence of the initial geometric definition of the sheet on the optimization process is carried out, using a NURBS formulation to define the upper surface of the metal sheet, whose geometry changes during the forming process.
In the case of tool optimization, a two-stage forging process is presented. With the aim of obtaining a perfect cylinder after forging, two distinct methods are considered: in the first, the initial shape of the cylinder is optimized, and in the second, the shape of the first-stage forming tool is optimized. Different methods are used to parametrize the free surface of the cylinder, and different parametrizations are also used to define the tool. The optimization strategies proposed in this work efficiently solve optimization problems for the metal-forming industry.
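A sketch of the single-point parameter-identification setting described above (the Swift hardening law, the synthetic data, and the solver choice are illustrative assumptions, not the thesis's formulation):

```python
# Fit hardening-law parameters to a measured stress-strain curve by least
# squares; a cascade/hybrid strategy would wrap this local solve in a global
# (e.g., evolutionary) search.
import numpy as np
from scipy.optimize import least_squares

def swift_stress(params, strain):
    """Swift hardening law: sigma = K * (eps0 + strain) ** n."""
    K, eps0, n = params
    return K * (eps0 + strain) ** n

# Synthetic "experimental" curve standing in for measured tensile-test data.
strain = np.linspace(0.0, 0.3, 50)
true = (520.0, 0.01, 0.22)
measured = swift_stress(true, strain) + np.random.default_rng(2).normal(0, 2, 50)

# Objective: residual between model prediction and experiment at each point.
res = least_squares(
    lambda p: swift_stress(p, strain) - measured,
    x0=(400.0, 0.05, 0.3),
    bounds=([100.0, 1e-4, 0.05], [1000.0, 0.1, 0.6]),
)
print(np.round(res.x, 3))   # recovered (K, eps0, n), close to the true values
```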