853 results for Search of Optimal Paths
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions. However, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirlosker turn master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the S/N analysis yielded the optimum machining parameters from the experimentation. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively examine new design solutions in the relevant search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis and validated against published results from the literature. Optimization methodologies, namely Simulated Annealing (SA), Particle Swarm Optimization (PSO), a Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA), are applied to optimize the machining parameters for dry turning of SS420. All of these algorithms were tested for efficiency, robustness and accuracy, and for how often they outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB. For each evolutionary method, optimum cutting conditions that achieve a better surface finish are provided. The computational results using SA clearly demonstrated that the proposed solution procedure is capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming, or collaborative, behaviour of biological populations. The results show that PSO provides better results and is also more computationally efficient. Based on the results obtained with CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm incorporates a stochastic crossover technique and an artificial initial population scheme to provide a faster search mechanism.
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS 420 material, arriving at the optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures nature uses to optimize its own systems.
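To make the workflow above concrete, here is a minimal, hypothetical sketch (in Python, whereas the thesis code was written in MATLAB) of how a particle swarm could search over cutting speed, feed, depth of cut and nose radius to minimize a surface-roughness model; the response-surface coefficients, parameter bounds and PSO settings are placeholders, not values from the experiments reported in the abstract.

# Hypothetical PSO sketch: minimize a placeholder response-surface model of
# surface roughness Ra(v, f, d, r). Coefficients and bounds are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def roughness(x):
    v, f, d, r = x.T            # cutting speed, feed, depth of cut, nose radius
    # Placeholder second-order response surface (not the fitted thesis model).
    return 2.0 - 0.005 * v + 60.0 * f + 0.8 * d - 1.5 * r + 120.0 * f**2 + 0.3 * d * f

lb = np.array([50.0, 0.05, 0.5, 0.4])    # assumed lower bounds
ub = np.array([200.0, 0.30, 2.0, 1.2])   # assumed upper bounds

n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(lb, ub, size=(n, 4))     # particle positions
vel = np.zeros_like(x)                   # particle velocities
pbest, pbest_val = x.copy(), roughness(x)
g = pbest[np.argmin(pbest_val)]          # global best position

for _ in range(iters):
    r1, r2 = rng.random((n, 4)), rng.random((n, 4))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + vel, lb, ub)
    val = roughness(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    g = pbest[np.argmin(pbest_val)]

print("best parameters [v, f, d, r]:", g, "predicted Ra:", pbest_val.min())

The same skeleton carries over to the SA and GA variants: only the rule that proposes new candidate parameter vectors changes, while the roughness model being minimized stays the same.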
Abstract:
In this thesis we have developed a few inventory models in which items are served to the customers after a processing time. This leads to a queue of demand even when items are available. In chapter 2 we discuss a problem involving the search for orbital customers in order to provide them with inventory; retrial of orbital customers is also considered in that chapter. In chapter 5 we again discuss a retrial inventory model, this time without orbital search of customers. In the remaining chapters (3, 4 and 6) we do not consider retrial of customers; rather, we assume the waiting room capacity of the system to be arbitrarily large. Although the models in chapters 3 and 4 differ only in that the former considers a positive lead time for replenishment of inventory while the latter assumes it to be negligible, we arrive at sharper results in chapter 4. In chapter 6 we consider a production inventory model for a single item in which the production time and the customer service time follow distinct Erlang distributions. We also introduce protection of production and service stages and investigate the optimal number of stages to be protected.
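As a rough illustration of the chapter 6 ingredients, the sketch below samples Erlang-distributed production and service times for a single item; the shape and rate parameters are hypothetical and the setting is heavily simplified relative to the model in the thesis.

# Hypothetical sketch: sample Erlang (gamma with integer shape) production and
# service times for a single-item production inventory setting.
# Parameters are illustrative, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(1)

k_prod, rate_prod = 3, 2.0   # production: Erlang with 3 stages, rate 2.0 per stage
k_serv, rate_serv = 2, 1.5   # service:    Erlang with 2 stages, rate 1.5 per stage

def erlang(k, rate, size):
    # Erlang(k, rate) is a gamma distribution with integer shape k and scale 1/rate.
    return rng.gamma(shape=k, scale=1.0 / rate, size=size)

production_times = erlang(k_prod, rate_prod, 10_000)
service_times = erlang(k_serv, rate_serv, 10_000)

print("mean production time:", production_times.mean(), "(theory:", k_prod / rate_prod, ")")
print("mean service time:   ", service_times.mean(), "(theory:", k_serv / rate_serv, ")")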
Abstract:
Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately control complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that allows one to determine the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This allows one to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits on certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices whose basic information carrier is the qubit, but extends to systems whose fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering, such as feedback and optimisation, to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow the derivation of novel optimisation functionals that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time needed to reach a given fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping; for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
Abstract:
We consider two-sided many-to-many matching markets in which each worker may work for multiple firms and each firm may hire multiple workers. We study individual and group manipulations in centralized markets that employ (pairwise) stable mechanisms and that require participants to submit rank order lists of agents on the other side of the market. We are interested in simple preference manipulations that have been reported and studied in empirical and theoretical work: truncation strategies, which are the lists obtained by removing a tail of least preferred partners from a preference list, and the more general dropping strategies, which are the lists obtained by only removing partners from a preference list (i.e., no reshuffling). We study when truncation / dropping strategies are exhaustive for a group of agents on the same side of the market, i.e., when each match resulting from preference manipulations can be replicated or improved upon by some truncation / dropping strategies. We prove that for each stable mechanism, truncation strategies are exhaustive for each agent with quota 1 (Theorem 1). We show that this result can be extended neither to group manipulations (even when all quotas equal 1; Example 1) nor to individual manipulations when the agent's quota is larger than 1 (even when all other agents' quotas equal 1; Example 2). Finally, we prove that for each stable mechanism, dropping strategies are exhaustive for each group of agents on the same side of the market (Theorem 2), i.e., independently of the quotas.
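The two manipulation classes defined above are simple list operations; the hypothetical snippet below illustrates the difference on a toy preference list (the agent names are arbitrary placeholders).

# Toy illustration of the two manipulation classes discussed above.
# A preference list is ordered from most to least preferred partner.
true_preferences = ["f1", "f2", "f3", "f4", "f5"]

def truncation(prefs, keep):
    """Truncation strategy: remove a tail of least preferred partners."""
    return prefs[:keep]

def dropping(prefs, dropped):
    """Dropping strategy: remove some partners, keeping the original order."""
    return [p for p in prefs if p not in dropped]

print(truncation(true_preferences, 3))           # ['f1', 'f2', 'f3']
print(dropping(true_preferences, {"f2", "f4"}))  # ['f1', 'f3', 'f5'] -- not a truncation

Every truncation is a dropping strategy (drop a suffix), but not conversely, which is why exhaustiveness of dropping strategies (Theorem 2) is the weaker, more widely applicable statement.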
Abstract:
The respiratory emission of CO2 from roots is frequently proposed as an attractant that allows soil-dwelling insects to locate host plant roots, but this role has recently become less certain. CO2 is emitted from many sources other than roots, so does not necessarily indicate the presence of host plants, and because of the high density of roots in the upper soil layers, spatial gradients may not always be perceptible by soil-dwelling insects. The role of CO2 in host location was investigated using the clover root weevil Sitona lepidus Gyllenhall and its host plant white clover (Trifolium repens L.) as a model system. Rhizochamber experiments showed that CO2 concentrations were approximately 1000 ppm around the roots of white clover, but significantly decreased with increasing distance from roots. In behavioural experiments, no evidence was found for any attraction by S. lepidus larvae to point emissions of CO2, regardless of emission rates. Fewer than 15% of larvae were attracted to point emissions of CO2, compared with a control response of 17%. However, fractal analysis of movement paths in constant CO2 concentrations demonstrated that searching by S. lepidus larvae significantly intensified when they experienced CO2 concentrations similar to those found around the roots of white clover (i.e. 1000 ppm). It is suggested that respiratory emissions of CO2 may act as a 'search trigger' for S. lepidus, whereby it induces larvae to search a smaller area more intensively, in order to detect location cues that are more specific to their host plant.
Abstract:
1. Insect predators often aggregate to patches of high prey density and use prey chemicals as cues for oviposition. If prey have mutualistic guardians such as ants, however, then these patches may be less suitable for predators. 2. Ants often tend aphids and defend them against predators such as ladybirds. Here, we show that ants can reduce ladybird performance by destroying eggs and physically attacking larvae and adults. 3. Unless ladybirds are able to defend against ant attacks they are likely to have adaptations to avoid ants. We show that Adalia bipunctata ladybirds not only move away from patches with Lasius niger ants, but also avoid laying eggs in these patches. Furthermore, ladybirds not only respond to ant presence, but also detect ant semiochemicals and alter oviposition strategy accordingly. 4. Ant semiochemicals may signal the extent of ant territories allowing aphid predators to effectively navigate a mosaic landscape of sub-optimal patches in search of less well-defended prey. Such avoidance probably benefits both ants and ladybirds, and the semiochemicals could be regarded as a means of cooperative communication between enemies. 5. Overall, ladybirds respond to a wide range of positive and negative oviposition cues that may trade-off with each other and internal motivation to determine the overall oviposition strategy.
Abstract:
This paper considers left-invariant control systems defined on the Lie groups SU(2) and SO(3). Such systems have a number of applications in both classical and quantum control problems. The purpose of this paper is two-fold. Firstly, the optimal control problem for a system varying on these Lie groups, with a cost that is quadratic in the control, is lifted to the associated Hamiltonian vector fields through the Maximum Principle of optimal control and solved explicitly. Secondly, the control systems are integrated down to the level of the group to give the solutions for the optimal paths corresponding to the optimal controls. In addition, it is shown here that integrating these equations on the Lie algebra su(2) gives simpler solutions than when they are integrated on the Lie algebra so(3).
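A standard formulation consistent with the description above, given here only as a hedged sketch (the basis elements A_i and the cost weights c_i are generic placeholders, not necessarily the paper's choices):

\dot{g}(t) = g(t)\,\bigl(u_1(t)A_1 + u_2(t)A_2 + u_3(t)A_3\bigr), \qquad g(t) \in G, \ \ G = SU(2) \text{ or } SO(3),

\min_{u} \; \frac{1}{2}\int_0^T \bigl(c_1 u_1(t)^2 + c_2 u_2(t)^2 + c_3 u_3(t)^2\bigr)\,dt,

where A_1, A_2, A_3 span the corresponding Lie algebra (su(2) or so(3)). The Maximum Principle lifts this problem to a Hamiltonian system on the cotangent bundle T*G; integrating the extremal equations yields the optimal controls, and integrating the system down to the group level yields the optimal paths.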
Abstract:
Classical measures of network connectivity are the number of disjoint paths between a pair of nodes and the size of a minimum cut. For standard graphs, these measures can be computed efficiently using network flow techniques. However, in the Internet at the level of autonomous systems (ASs), referred to as the AS-level Internet, routing policies impose restrictions on the paths that traffic can take in the network. These restrictions can be captured by the valley-free path model, which assumes a special directed graph model in which edge types represent relationships between ASs. We consider the adaptation of the classical connectivity measures to the valley-free path model, where computing them is NP-hard. Our first main contribution is the presentation of algorithms for the computation of disjoint paths and minimum cuts in the valley-free path model. These algorithms are useful for ASs that want to evaluate different options for selecting upstream providers to improve the robustness of their connection to the Internet. Our second main contribution is an experimental evaluation of our algorithms on four types of directed graph models of the AS-level Internet produced by different inference algorithms. Most importantly, the evaluation shows that our algorithms are able to compute optimal solutions to realistically sized instances of the connectivity problems in the valley-free path model in reasonable time. Furthermore, our experimental results provide information about the characteristics of the directed graph models of the AS-level Internet produced by different inference algorithms. It turns out that (i) we can quantify the difference between the undirected AS-level topology and the directed graph models with respect to fundamental connectivity measures, and (ii) the different inference algorithms yield topologies that are similar with respect to connectivity but different with respect to the types of paths that exist between pairs of ASs.
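For the classical (policy-free) case mentioned above, the flow-based computation is standard; the sketch below uses networkx on a made-up undirected topology (not one of the inferred AS graphs) to show the two measures side by side. The valley-free variants require the specialized algorithms presented in the paper.

# Classical connectivity measures on a standard graph via network flow
# (illustrative toy topology; the valley-free model needs dedicated algorithms).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("AS1", "AS2"), ("AS1", "AS3"), ("AS2", "AS4"),
    ("AS3", "AS4"), ("AS2", "AS3"), ("AS1", "AS4"),
])

s, t = "AS1", "AS4"
paths = list(nx.edge_disjoint_paths(G, s, t))   # flow-based disjoint paths
cut = nx.minimum_edge_cut(G, s, t)              # flow-based minimum cut

print("edge-disjoint paths:", paths)
print("minimum edge cut size:", len(cut))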
Abstract:
First, we survey recent research on the application of optimal tax theory to housing. This work suggests that the under-taxation of housing for owner occupation distorts investment, so that owner occupiers are encouraged to over-invest in housing. Simulations of the US economy suggest that this is true there. However, the theoretical work excludes consideration of land, and the simulations exclude taxes other than income taxes. These exclusions are important for the US and UK economies. In the US, the property tax is relatively high. We argue that excluding the property tax is wrong: when the property tax is taken into account, owner-occupied housing is not under-taxed in the US. In the UK, property taxes are relatively low, but the cost of land has been increasing in real terms for forty years as a result of a policy of constraining land for development. The price of land for housing is now higher than elsewhere. Effectively, an implicit tax is paid by first-time buyers, which has reduced housing investment. When land is taken into account, over-investment in housing is not encouraged in the UK either.
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results yield immediately the corresponding results for pathwise solutions to stochastic differential equations driven by such Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such Gaussian process.
Abstract:
A structure-dynamic approach to cortical systems is reported, based on the number of paths and the accessibility of each node. The latter measurement is obtained by performing self-avoiding random walks in the respective networks, so as to simulate dynamics, and then calculating the entropies of the transition probabilities for walks starting from each node. Cortical networks of three species, namely cat, macaque and human, are studied considering structural and dynamical aspects. It is verified that the human cortical network presents the highest accessibility and number of paths (in terms of z-scores). The correlation between the number of paths and accessibility is also investigated as a means to quantify the level of independence between paths connecting pairs of nodes in cortical networks. By comparing the cortical networks of the cat, macaque and human, it is verified that the human cortical network tends to present the largest number of independent paths of length larger than four. These results suggest that the human cortical network is potentially the most resilient to brain injuries.
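A hedged sketch of the accessibility idea described above (self-avoiding walks, entropies of transition probabilities) on a toy graph is given below; the walk length and the exp(entropy) normalisation follow common usage of this measure and are assumptions, not necessarily the exact choices of the paper.

# Hypothetical sketch of node accessibility: estimate, for each start node, the
# distribution of end points of self-avoiding random walks of fixed length, and
# report the exponential of the entropy of that distribution. Toy random graph,
# assumed walk length and normalisation.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(30, 0.15, seed=0)

def self_avoiding_walk_end(G, start, length):
    walk = [start]
    for _ in range(length):
        options = [n for n in G.neighbors(walk[-1]) if n not in walk]
        if not options:
            break
        walk.append(options[rng.integers(len(options))])
    return walk[-1]

def accessibility(G, node, length=4, samples=2000):
    ends = [self_avoiding_walk_end(G, node, length) for _ in range(samples)]
    _, counts = np.unique(ends, return_counts=True)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log(p))
    return np.exp(entropy)   # effective number of distinct nodes reached

acc = {v: accessibility(G, v) for v in G}
print("most accessible node:", max(acc, key=acc.get))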
Abstract:
An important feature of life-cycle models is the presence of uncertainty regarding one's labor income. Yet this issue, long recognized in other areas, has not received enough attention in the optimal taxation literature. This paper is an attempt to fill this gap. We write a simple three-period model in which agents gradually learn their productivities. In a framework akin to Mirrlees' (1971) static one, we derive properties of optimal tax schedules and show that: i) if preferences are (weakly) separable, uniform taxation of goods is optimal; ii) if they are (strongly) separable, capital income is to be taxed at a rate distinct from that of other forms of investment.
Abstract:
In almost all cases, the goal of the design of automatic control systems is to obtain the parameters of the controllers, which are described by differential equations. In general, the controller is artificially built, and it is possible to update its initial conditions. In the design of optimal quadratic regulators, the initial conditions of the controller can be changed in an optimal way, and this can improve the performance of the controlled system. Following this idea, an LMI-based design procedure to update the initial conditions of PI controllers, considering nonlinear plants described by Takagi-Sugeno fuzzy models, is presented. The importance of the proposed method is that it also allows other specifications, such as the decay rate and constraints on the control input and output. An application to the control of an inverted pendulum illustrates the effectiveness of the proposed method.
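Purely as background for the quadratic-regulator idea mentioned above (not the LMI-based procedure of the paper), here is a minimal continuous-time LQR sketch for a linearized inverted pendulum; the plant matrices and weights are generic textbook-style placeholders.

# Background sketch: continuous-time LQR for a linearized inverted pendulum
# (generic placeholder model, not the Takagi-Sugeno/LMI design of the paper).
import numpy as np
from scipy.linalg import solve_continuous_are

# State: [cart position, cart velocity, pole angle, pole angular velocity].
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -0.7, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 15.8, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [-2.0]])

Q = np.diag([10.0, 1.0, 100.0, 1.0])   # state weighting
R = np.array([[1.0]])                  # control weighting

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x

print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))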
Abstract:
An important stage in the solution of active vibration control in flexible structures is the optimal placement of sensors and actuators. In many works, the positioning of these devices in distributed-parameter systems is based mainly on controllability approaches or performance criteria; the positions that enhance such measures are considered optimal. These techniques do not take into account the spatial variation of the disturbances. A way to enhance the robustness of the control design is to locate the actuators considering the spatial distribution of the worst-case disturbance. This paper addresses the inclusion of the effect of external disturbances in the formulation of the optimal placement problem for sensors and piezoelectric actuators. The paper concludes with a numerical simulation of a truss structure, considering that the disturbance is applied at a point known a priori. The system norm is used as the objective function, and an LQR (Linear Quadratic Regulator) controller is used to quantify the performance of the different sensor/actuator configurations.
Abstract:
Problems such as voltage rise at the end of a feeder, demand-supply unbalance under fault conditions, power quality decline, increased power losses, and reduced reliability levels may occur if Distributed Generators (DGs) are not properly allocated. For this reason, researchers have employed several solution techniques to solve the problem of optimal allocation of DGs. This work focuses on the ancillary service of reactive power support provided by DGs. The main objective is to price this service by determining the costs a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). The LOC is determined for different DG allocation alternatives obtained from a multi-objective optimization process that aims at minimizing the losses in the lines of the system and the costs of active power generation from the DGs, and at maximizing the static voltage stability margin of the system. The effectiveness of the proposed methodology in improving these goals was demonstrated using the IEEE 34-bus distribution test feeder with two DGs considered for allocation.
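As a rough, hypothetical illustration of the loss-of-opportunity idea (a common capability-curve approximation, not necessarily the exact pricing rule of the paper): when a DG with apparent power limit S supplies reactive power Q, its deliverable active power is capped at sqrt(S^2 - Q^2), and the LOC can be approximated by valuing the curtailed active power at an assumed energy price.

# Hypothetical LOC approximation: active power sales a DG gives up when its
# capability limit forces curtailment in order to provide reactive support.
# All numbers (ratings, price, duration) are illustrative assumptions.
import math

S = 1.0            # DG apparent power rating (MVA)
P_scheduled = 0.95 # active power the DG could otherwise sell (MW)
price = 60.0       # assumed energy price ($/MWh)
hours = 1.0        # duration of the reactive support (h)

def loc(q_support):
    """Loss of opportunity cost for providing q_support MVAr of reactive power."""
    p_max = math.sqrt(max(S**2 - q_support**2, 0.0))  # capability-curve limit
    curtailed = max(P_scheduled - p_max, 0.0)         # active power given up
    return curtailed * price * hours

for q in (0.1, 0.3, 0.5):
    print(f"Q = {q:.1f} MVAr -> LOC = ${loc(q):.2f}")

In this toy example the LOC is zero until the requested reactive support pushes the operating point past the capability curve, which is exactly why the allocation alternatives found by the multi-objective search can lead to very different opportunity costs.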