983 results for Solution techniques
Abstract:
This paper presents the first full-fledged branch-and-price (BaP) algorithm for the capacitated arc-routing problem (CARP). Prior exact solution techniques rely either on cutting planes or on the transformation of the CARP into a node-routing problem. The drawbacks are either models with inherent symmetry, dense underlying networks, or a formulation in which edge flows in a potential solution do not allow the reconstruction of unique CARP tours. The proposed algorithm circumvents all these drawbacks by taking the beneficial ingredients from existing CARP methods and combining them in a new way. The first step is the solution of the one-index formulation of the CARP in order to produce strong cuts and an excellent lower bound. This bound is known to be typically stronger than relaxations of a pure set-partitioning CARP model. Such a set-partitioning master program results from a Dantzig-Wolfe decomposition. In the second phase, the master program is initialized with the strong cuts, CARP tours are iteratively generated by a pricing procedure, and branching is required to produce integer solutions. This is a cut-first BaP-second algorithm, and its main function is, in fact, the splitting of edge flows into unique CARP tours.
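A minimal sketch of the tour-splitting step the abstract singles out as the algorithm's main function: decomposing integer edge flows into unique tours through the depot. The data and function name are illustrative; real CARP tours involve undirected edges, vehicle capacities, and service/deadhead distinctions, all ignored here, and the sketch assumes every unit of flow lies on a cycle passing the depot.

from collections import defaultdict

def split_flow_into_tours(arcs, depot=0):
    """Decompose unit arc flows into closed depot tours (Hierholzer-style walk)."""
    out = defaultdict(list)
    for u, v in arcs:
        out[u].append(v)            # one entry per unit of flow on arc (u, v)
    tours = []
    while out[depot]:
        tour, node = [depot], depot
        while True:                 # walk until the depot is reached again;
            node = out[node].pop()  # flow conservation guarantees an exit arc
            tour.append(node)
            if node == depot:
                break
        tours.append(tour)
    return tours

# Two unit-flow tours leaving depot 0: 0-1-2-0 and 0-3-0.
print(split_flow_into_tours([(0, 1), (1, 2), (2, 0), (0, 3), (3, 0)]))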
Abstract:
This paper shows that deriving optimal-policy and consistent-policy outcomes requires control-theory and game-theory solution techniques, respectively. Optimal policy and consistent policy often produce different outcomes, even in a one-period model. We analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that is not necessarily consistent with the structure of the microeconomy. Thus, the social loss function cannot serve directly as the central bank's loss function. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretical framework.
Abstract:
This paper shows that deriving optimal-policy and consistent-policy outcomes requires control-theory and game-theory solution techniques, respectively. Optimal policy and consistent policy often produce different outcomes, even in a one-period model. We analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the social loss function cannot serve directly as the central bank's loss function. Accordingly, we employ implementation theory to design a central bank loss function (mechanism design) with consistent targets, while the social loss function serves as a social welfare criterion. That is, with the correct mechanism design for the central bank loss function, optimal policy and consistent policy become identical. In other words, optimal policy proves implementable (consistent).
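As a worked illustration of how inconsistent targets drive a wedge between optimal and consistent policy, consider the standard Barro-Gordon-style setup (a textbook example under stated assumptions, not necessarily the paper's exact model). Let the social loss and the Phillips curve be

L^{S} = (\pi - \pi^{*})^{2} + \lambda (y - y^{*})^{2}, \qquad y = y_{n} + b(\pi - \pi^{e}), \qquad y^{*} > y_{n}.

Under discretion (consistent policy) with rational expectations \pi^{e} = \pi, the first-order condition yields the inflation bias

\pi^{\mathrm{disc}} = \pi^{*} + \lambda b\,(y^{*} - y_{n}) > \pi^{*},

whereas assigning the central bank the consistent target \tilde{y} = y_{n} removes the bias, so the discretionary outcome coincides with the optimal one, \pi = \pi^{*}.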
Abstract:
The design of shell and spatial structures represents an important challenge even with the use of modern computer technology. If we concentrate on concrete shell structures, many problems must be faced, such as the conceptual and structural disposition, optimal shape design, analysis, construction methods, details, etc., and all these problems are interconnected. As an example, shape optimization requires the use of several disciplines, such as structural analysis, sensitivity analysis, optimization strategies, and geometrical design concepts. Similar comments apply to other space structures such as steel trusses with single or double shape and tension structures. In relation to the analysis, the Finite Element Method appears to be the most widespread and versatile technique used in practice. Several issues arise in the application of this method. First, either the pertinent shell theory or, alternatively, the degenerated 3-D solid approach should be chosen. According to this choice, the suitable FE model has to be adopted, i.e., a displacement, stress, or mixed formulated element. The good behavior of shell structures under dead loads, which are carried towards the supports mainly by compressive stresses, is impaired by the high imperfection sensitivity usually exhibited by these structures. This last effect is particularly important if large-deformation and material nonlinearities of the shell interact unfavorably, as can be the case for thin reinforced shells. In this respect, the study of the stability of the shell represents a compulsory step in the analysis. Therefore, there are currently very active fields of research, such as the different descriptions of consistent nonlinear shell models given by Simo, Fox and Rifai, Mantzenmiller, and Büchter and Ramm, among others; the consistent formulation of efficient tangent stiffness, as presented by Ortiz and by Schweizerhof and Wriggers, with application to concrete shells exhibiting creep behavior given by Scordelis and coworkers; and, finally, the development of numerical techniques needed to trace the nonlinear response of the structure. The objective of this paper concentrates on this last research aspect, i.e., the presentation of a state of the art of existing solution techniques for the nonlinear analysis of structures. In this presentation, the following excellent reviews on this subject will be mainly used.
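As a pointer to what such solution techniques look like in practice, here is a minimal sketch of the incremental-iterative Newton-Raphson scheme on a hypothetical one-degree-of-freedom model with a limit point (where load control breaks down and path-following methods such as arc-length control take over); all functions and numbers are illustrative, not taken from the paper.

import numpy as np

def internal_force(u):
    return u - u**3 / 3.0      # toy law with a limit point at u = 1

def tangent_stiffness(u):
    return 1.0 - u**2          # vanishes at the limit point

def newton_step(u, lam, p=1.0, tol=1e-10, max_iter=25):
    """Equilibrium iteration at fixed load factor lam: solve internal_force(u) = lam*p."""
    for _ in range(max_iter):
        residual = lam * p - internal_force(u)
        if abs(residual) < tol:
            return u
        u += residual / tangent_stiffness(u)
    raise RuntimeError("no convergence (near a limit point?)")

u = 0.0
for lam in np.linspace(0.05, 0.6, 12):   # stays below the limit load 2/3
    u = newton_step(u, lam)
    print(f"load factor {lam:.2f} -> displacement {u:.4f}")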
Abstract:
Hazard and risk assessment of landslides with potentially long run-out is becoming more and more important. Numerical tools exploiting different constitutive models, initial data, and numerical solution techniques are important for making the expert's assessment more objective, even though they cannot substitute for the expert's understanding of the site-specific conditions and the processes involved. This paper presents a depth-integrated model accounting for pore water pressure dissipation, together with applications both to real events and to problems for which analytical solutions exist. The main ingredients are: (i) the mathematical model, which includes pore pressure dissipation as an additional equation; this makes it possible to model flowslide problems with high mobility at the beginning, the landslide mass coming to rest once pore water pressures dissipate; (ii) the rheological models describing basal friction: Bingham, frictional, Voellmy, and cohesive-frictional viscous models; (iii) simple erosion laws, providing a comparison between the approaches of Egashira, Hungr, and Blanc; and (iv) a Lagrangian SPH model to discretize the equations, including pore water pressure information associated with the moving SPH nodes.
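A hedged sketch of the basal shear-stress laws named in item (ii), in the simplified forms commonly used in depth-integrated models; the paper's exact expressions may differ, and all parameters below are illustrative.

RHO, G = 2000.0, 9.81  # bulk density (kg/m^3) and gravity (m/s^2), illustrative

def frictional(h, v, p_w, tan_phi=0.4):
    """Frictional law on the effective normal stress (pore pressure p_w)."""
    sigma_eff = max(RHO * G * h - p_w, 0.0)
    return sigma_eff * tan_phi

def voellmy(h, v, p_w, tan_phi=0.2, xi=500.0):
    """Voellmy law: friction term plus turbulent drag proportional to v^2 / xi."""
    return frictional(h, v, p_w, tan_phi) + RHO * G * v**2 / xi

def bingham(h, v, tau_y=1000.0, mu_b=50.0):
    """First-order depth-integrated Bingham approximation."""
    return tau_y + 3.0 * mu_b * v / h

for law in (frictional, voellmy):
    print(law.__name__, law(h=2.0, v=5.0, p_w=10e3))
print("bingham", bingham(h=2.0, v=5.0))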
Abstract:
Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has occurred in the last two decades to help understand the underlying phenomena. Yet, until recently the development of granulation systems was mostly based on popular practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process system approaches to granulation processes. This paper highlights some of those aspects of granulation. (c) 2005 Elsevier Ltd. All rights reserved.
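For context on what "improved mathematical representations" typically means in this literature (a hedged gloss, not the paper's own formulation): granulation models are commonly written as population balances for the particle size distribution n(v, t) with an aggregation kernel \beta, e.g.

\frac{\partial n(v,t)}{\partial t} = \frac{1}{2} \int_{0}^{v} \beta(v-u, u)\, n(v-u, t)\, n(u, t)\, du - n(v,t) \int_{0}^{\infty} \beta(v, u)\, n(u, t)\, du,

and it is for model classes of this kind that solution techniques such as size-class discretizations, moment methods, and Monte Carlo schemes are developed.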
Abstract:
This thesis covers both experimental and computer investigations into the dynamic behaviour of mechanical seals. The literature survey reveals no investigations of the effect of vibration on mechanical seals of the type common in the various process industries. Typical seal designs are discussed. A form of Reynolds' equation has been developed that permits the calculation of stiffnesses and damping coefficients for the fluid film. The dynamics of the mechanical seal floating ring have been investigated using approximate formulae, and it has been shown that the floating ring behaves as a rigid body. Some elements, such as the radial damping due to the fluid film, are small and may be neglected. The equations of motion of the floating ring have been developed using the significant elements, and a solution technique is described. The stiffness and damping coefficients of nitrile rubber o-rings have been obtained. These show a wide variation, with a constant stiffness up to 60 Hz. The importance of the effect of temperature on the properties is discussed. An unsuccessful test rig is described in the appendices. The dynamic behaviour of a mechanical seal has been investigated experimentally, including the effects of changes of speed, sealed pressure, and seal geometry. The results, as expected, show that high vibration levels result in both high leakage and high seal temperatures. Computer programs have been developed to solve Reynolds' equation and the equations of motion. Two solution techniques were developed for the latter program; the unsuccessful one is described in the appendices. Some stability problems were encountered, but despite these the solution shows good agreement with some of the experimental conditions. Possible reasons for the discrepancies are discussed. Various suggestions for future work in this field are given, including the combining of the programs and more extensive experimental and computer modelling.
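A minimal sketch of the kind of fluid-film calculation involved (a toy, not the thesis's formulation): the steady one-dimensional Reynolds equation d/dx(h^3 dp/dx) = 6*mu*U*dh/dx, solved by central finite differences with ambient pressure at both ends; geometry and parameters are illustrative.

import numpy as np

n, L = 201, 1e-3                 # grid points, film length (m)
mu, U = 1e-3, 5.0                # viscosity (Pa s), sliding speed (m/s)
x = np.linspace(0.0, L, n)
h = 2e-6 + 1e-6 * (1 - x / L)    # linearly converging film thickness (m)

dx = x[1] - x[0]
h3_face = 0.5 * (h[:-1]**3 + h[1:]**3)   # h^3 evaluated at cell faces
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1] = h3_face[i - 1]
    A[i, i] = -(h3_face[i - 1] + h3_face[i])
    A[i, i + 1] = h3_face[i]
    b[i] = 6 * mu * U * (h[i + 1] - h[i - 1]) * dx / 2
A[0, 0] = A[-1, -1] = 1.0        # p = 0 (ambient) at both ends
p = np.linalg.solve(A, b)
print(f"peak film pressure ~ {p.max():.3e} Pa")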
Abstract:
This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step by Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC-sampling-based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques, as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
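For reference, the conventional conjugate Poisson-Gamma update the thesis compares PEWMA against is a one-liner: with a Gamma(alpha, beta) prior on a failure rate and k failures observed over exposure time t, the posterior is Gamma(alpha + k, beta + t). A sketch with illustrative numbers:

alpha, beta = 1.0, 2.0                   # prior shape and rate (rate in 1/years)
data = [(0, 1.0), (2, 1.0), (1, 1.0)]    # (failures, exposure in years) per period

for k, t in data:
    alpha, beta = alpha + k, beta + t    # conjugate Poisson-Gamma update
    print(f"posterior mean failure rate: {alpha / beta:.3f} per year")

# A PEWMA-style scheme instead discounts old evidence so the rate can drift,
# which is the advantage noted above for data collected over long time spans.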
Abstract:
In past years, we could observe a significant number of new robotic systems in science, industry, and everyday life. To reduce the complexity of these systems, industry constructs robots designated for the execution of a specific task, such as vacuum cleaning, autonomous driving, observation, or transportation operations. As a result, such robotic systems need to combine their capabilities to accomplish complex tasks that exceed the abilities of individual robots. However, to achieve emergent cooperative behavior, multi-robot systems require a decision process that copes with the communication challenges of the application domain. This work investigates a distributed multi-robot decision process that addresses unreliable and transient communication. This process is composed of five steps, which we embedded into the ALICA multi-agent coordination language, guided by the PROViDE negotiation middleware. The first step encompasses the specification of the decision problem, which is an integral part of the ALICA implementation. In our decision process, we describe multi-robot problems as continuous nonlinear constraint satisfaction problems. The second step addresses the calculation of solution proposals for this problem specification. Here, we propose an efficient solution algorithm that integrates incomplete local search and interval propagation techniques into a satisfiability solver, forming a satisfiability modulo theories (SMT) solver. In the third decision step, the PROViDE middleware replicates the solution proposals among the robots. This replication process is parameterized with a distribution method, which determines the consistency properties of the proposals. In the fourth step, we investigate conflict resolution: an acceptance method ensures that each robot supports one of the replicated proposals. As the conflict resolution is integrated into the replication process, a sound selection of the distribution and acceptance methods leads to eventual convergence of the robot proposals. In order to avoid the execution of conflicting proposals, the last step comprises a decision method, which selects a proposal for implementation in case the conflict resolution fails. The evaluation of our work shows that the use of incomplete solution techniques in the constraint satisfaction solver outperforms other state-of-the-art approaches in runtime for many typical robotic problems. We further show, through experimental setups and practical application in the RoboCup environment, that our decision process is suitable for making quick decisions in the presence of packet loss and delay. Moreover, PROViDE requires less memory and bandwidth than other state-of-the-art middleware approaches.
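A toy illustration of the interval-propagation ingredient of the second step (hypothetical constraint and bounds; the thesis's solver handles general continuous nonlinear constraint systems): contract the variable boxes against a single constraint x^2 + y^2 <= r2.

import math

def contract(x_lo, x_hi, y_lo, y_hi, r2=4.0):
    """One propagation pass on x^2 + y^2 <= r2 for non-negative boxes."""
    if r2 - y_lo**2 < 0 or r2 - x_lo**2 < 0:
        return None                             # box inconsistent: no solution inside
    x_hi = min(x_hi, math.sqrt(r2 - y_lo**2))   # x^2 <= r2 - min(y^2)
    y_hi = min(y_hi, math.sqrt(r2 - x_lo**2))   # y^2 <= r2 - min(x^2)
    if x_lo > x_hi or y_lo > y_hi:
        return None
    return x_lo, x_hi, y_lo, y_hi

# x shrinks from [1, 3] to [1, sqrt(1.75)], y from [1.5, 3] to [1.5, sqrt(3)].
print(contract(1.0, 3.0, 1.5, 3.0))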
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to particular frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also a discussion of the differences from RPCA that make theoretical guarantees difficult.
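A small numerical sketch of the scalability problem behind these optimization results: a frame {f_i} is scalable when weights w_i >= 0 exist with sum_i w_i f_i f_i^T = I, and an approximate scaling can be fit by nonnegative least squares on the vectorized system. The random frame below is illustrative only.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
d, m = 3, 9
F = rng.standard_normal((d, m))                    # frame vectors as columns

A = np.column_stack([np.outer(f, f).ravel() for f in F.T])
b = np.eye(d).ravel()
w, residual = nnls(A, b)                           # w_i are the squared scalings

print("scaling weights:", np.round(w, 3))
print("residual ||sum_i w_i f_i f_i^T - I||_F =", round(residual, 6))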
Abstract:
This paper presents a study of AISI 1040 steel corrosion in an aqueous acetic acid buffer electrolyte containing 3.1 and 31 × 10⁻³ mol dm⁻³ of Na₂S, in both the presence and absence of 3.5 wt.% NaCl. The investigation of steel corrosion was carried out using potential polarization, open-circuit measurements, and in situ optical microscopy. The morphological analysis and classification of the types of surface corrosion damage by digital image processing reveal grain-boundary corrosion and show non-uniform sulfide film growth, which occurs preferentially over pearlitic grains through successive formation and dissolution of the film. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The purpose of this study was to evaluate the impact of different disinfection solutions on the flexural resistance of chemically activated acrylic resin. Test pieces were made of clear acrylic resin using a rectangular mold and two techniques: wet polymerization under pressure (n = 20) and dry polymerization under pressure (n = 20). Test pieces were subdivided into four equal groups: distilled water (control), sodium bicarbonate, 1% sodium hypochlorite, and effervescent tablets. The 30-day cycling technique consisted of immersing the test pieces in 100 ml of solution for 10 min three times a day and placing them in closed containers containing artificial saliva at 37°C. Subsequently, the flexural resistance of the samples was tested. Data were analyzed using two-way analysis of variance (ANOVA), with forces serving as the dependent variables and the polymerization technique and cleaning agents as independent variables. Post hoc multiple comparisons were performed using Tukey's test. There was no statistically significant difference in flexural strength between the two polymerization techniques. The greatest flexural strength was observed for the effervescent tablets group, followed by the control and 1% sodium hypochlorite groups, which were statistically similar. The sodium bicarbonate solution caused the lowest flexural resistance of the test pieces.
Abstract:
The importance of medicinal plants and their use in industrial applications is increasing worldwide, especially in Brazil. Phyllanthus species, popularly known as quebra-pedras in Brazil, are used in folk medicine for treating urinary infections and renal calculus. This paper reports an authenticity study of herbal drugs from Phyllanthus species, involving commercial and authentic samples, using spectroscopic techniques (FT-IR, ¹H HR-MAS NMR, and ¹H NMR in solution) combined with chemometric analysis. The spectroscopic techniques evaluated, coupled with chemometric methods, have great potential in the investigation of complex matrices. Furthermore, several metabolites were identified by the NMR techniques.
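A brief sketch of the chemometric step described (synthetic data; the paper works with FT-IR and NMR spectra of authentic and commercial samples): mean-center the spectra and compute PCA scores, on which sample groups can separate.

import numpy as np

rng = np.random.default_rng(1)
authentic = rng.normal(0.0, 1.0, (10, 200))    # 10 spectra, 200 channels
commercial = rng.normal(0.6, 1.0, (8, 200))    # shifted to mimic adulteration
X = np.vstack([authentic, commercial])

Xc = X - X.mean(axis=0)                        # mean-center each channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]                      # PC1/PC2 scores per sample
print(np.round(scores, 2))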