901 results for stochastic simulation method
Abstract:
Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and in the weight of materials, to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying human expertise and encapsulating that knowledge in computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure efficient manipulation of the large stiffness matrices, the frontal solution was applied. A series of experimental investigations provided data to compare with corresponding values obtained from the theoretical modelling.
A computer simulation, capable of predicting that a design will be satisfactory prior to the manufacture of the rolls, would allow effort to be concentrated into devising an optimum design where costs are minimised.
Abstract:
The high capital cost of robots prohibits their economic application. One method of making their application more economic is to increase their operating speed. This can be done in a number of ways, e.g. redesigning the robot geometry, improving the actuators and improving the control system design. In this thesis the control system design is considered. It is identified in the literature review that two aspects of robot control system design have not been addressed in any great detail by previous researchers. These are: how significant are the coupling terms in the dynamic equations of the robot, and what is the effect of the coupling terms on the performance of a number of typical independent axis control schemes? The work in this thesis addresses these two questions in detail. A program was designed to automatically calculate the path and trajectory, and to calculate the significance of the coupling terms, in an example application of a robot manipulator tracking a part on a moving conveyor. The inertial and velocity coupling terms were shown to be significant when the manipulator was considered to be directly driven. A simulation of the robot manipulator following the planned trajectory was established in order to assess the performance of the independent axis control strategies. The inertial coupling was shown to reinforce the control torque at the corner points of the trajectory, where there was an abrupt demand for acceleration in each axis but of opposite sign. This reduced the tracking error; however, the effect was not controllable. A second effect was due to the velocity coupling terms. At high trajectory speeds it was shown, by means of a root locus analysis, that the velocity coupling terms caused the system to become unstable.
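A minimal sketch of the destabilising velocity-coupling effect described above, treating the coupling as a speed-dependent reduction in effective axis damping (illustrative gains only, not the thesis's manipulator model):

```python
import numpy as np

def axis_eigenvalues(k, c, coupling):
    """Eigenvalues of a single linearized axis x'' = -k*x - (c - coupling)*x',
    where `coupling` stands in for a velocity-coupling torque that grows
    with trajectory speed (invented values, not the thesis model)."""
    A = np.array([[0.0, 1.0],
                  [-k, -(c - coupling)]])
    return np.linalg.eigvals(A)

# Low speed: coupling small, both eigenvalues in the left half-plane (stable).
low = axis_eigenvalues(k=4.0, c=2.0, coupling=0.5)
# High speed: coupling exceeds the intrinsic damping and the axis goes unstable.
high = axis_eigenvalues(k=4.0, c=2.0, coupling=3.0)
print(max(low.real), max(high.real))
```

Here the sign change of the dominant eigenvalue's real part plays the role of the root-locus branch crossing into the right half-plane.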
Abstract:
The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. An extension is made in this thesis to the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised model have been developed: two of them deterministic, called sensitivity analysis and deterministic appraisal, and the third stochastic, called risk simulation. These cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, expected accuracy and presentation of the output. The stochastic model is based on the assumption of statistical independence between individual variables and the existence of normality in their probability distributions. The component variables have been forecast using polynomials of degree four. This model was tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required.
The results of applying these measurements and planning models to the British motor vehicle manufacturing companies are presented and discussed.
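A minimal sketch of the risk-simulation idea above: independent, normally distributed component variables are sampled and propagated through the added-value productivity ratio (all figures invented, not British Leyland data):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Illustrative component variables, assumed statistically independent and
# normally distributed, as in the stochastic model described above.
sales     = rng.normal(1000.0, 80.0, N)  # sales revenue
materials = rng.normal(600.0, 50.0, N)   # bought-in materials, parts and services
labour    = rng.normal(250.0, 15.0, N)   # employment costs

added_value = sales - materials
productivity = added_value / labour      # added value per unit of labour cost

print(f"mean {productivity.mean():.2f}, "
      f"5th pct {np.percentile(productivity, 5):.2f}, "
      f"95th pct {np.percentile(productivity, 95):.2f}")
```

The resulting empirical distribution of productivity is what the class-interval output mentioned above summarises.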
Abstract:
SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to the combination of image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting maximum information content from the data.
Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
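The means of assessing classification accuracy mentioned above typically start from a confusion matrix; a minimal sketch with an invented three-class matrix (not the Forest of Dean results):

```python
import numpy as np

# Toy confusion matrix (rows = ground truth, cols = predicted class);
# the counts are invented for illustration.
cm = np.array([[50, 10,  5],
               [ 8, 40, 12],
               [ 6, 14, 55]])

overall_accuracy = np.trace(cm) / cm.sum()         # fraction of pixels correct
producers_accuracy = np.diag(cm) / cm.sum(axis=1)  # per class, w.r.t. ground truth
users_accuracy = np.diag(cm) / cm.sum(axis=0)      # per class, w.r.t. predictions

print(overall_accuracy, producers_accuracy, users_accuracy)
```

With sixteen classes, as above, the same computation applies to a 16x16 matrix; the per-class figures expose which classes drive an overall accuracy below 50%.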
Abstract:
This thesis considers the computer simulation of moist agglomerate collisions using the discrete element method (DEM). The study is confined to pendular-state moist agglomerates, in which the liquid is present either as adsorbed immobile films or as pendular liquid bridges, and the interparticle force is modelled as the sum of the adhesive contact force and the interstitial liquid bridge force. Algorithms used to model the contact force due to surface adhesion, tangential friction and particle deformation were derived by other researchers and are briefly described in the thesis. A theoretical study of the pendular liquid bridge force between spherical particles has been made, and algorithms for modelling the pendular liquid bridge force between spherical particles have been developed and incorporated into the Aston version of the DEM program TRUBAL. It has been found that, for static liquid bridges, the more explicit criterion for specifying the stable solution and critical separation is provided by the total free energy. The critical separation is given by the cube root of the liquid bridge volume to a good approximation, and the 'gorge method' of evaluation based on the toroidal approximation leads to errors in the calculated force of less than 10%. Three-dimensional computer simulations of an agglomerate impacting orthogonally with a wall are reported. The results demonstrate the effectiveness of adding viscous binder to prevent attrition, a common practice in process engineering. Results of simulated agglomerate-agglomerate collisions show that, for colinear agglomerate impacts, there is an optimum velocity which results in a near-spherical shape of the coalesced agglomerate and, hence, minimises attrition due to subsequent collisions. The relationship between the optimum impact velocity and the liquid viscosity and surface tension is illustrated. The effect of varying the angle of impact on the coalescence/attrition behaviour is also reported. (DX 187, 340).
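A hedged sketch of the two quantitative results quoted above. The rupture criterion follows the abstract (critical separation given by the cube root of the bridge volume); the force routine uses the textbook contact-limit bound 2*pi*gamma*R*cos(theta) as an illustrative stand-in for the toroidal 'gorge method' evaluation used in the thesis:

```python
import math

def rupture_distance(bridge_volume):
    """Critical separation of a pendular liquid bridge, taken (as in the
    abstract) as the cube root of the bridge volume to a good approximation."""
    return bridge_volume ** (1.0 / 3.0)

def max_capillary_force(radius, surface_tension, contact_angle=0.0):
    """Widely quoted value 2*pi*gamma*R*cos(theta) for the capillary force
    between equal spheres at contact -- an illustrative stand-in, not the
    thesis's gorge-method formula."""
    return 2.0 * math.pi * surface_tension * radius * math.cos(contact_angle)

V = 1.0e-12                                # m^3, illustrative bridge volume
print(rupture_distance(V))                 # critical separation in metres
print(max_capillary_force(50e-6, 0.072))   # 50 um particle, water, in newtons
```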
Abstract:
Knowledge of the molecular structure of solid dispersions is vital, yet despite thousands of reports in this area it remains unclear. The aim of this research is to investigate the molecular structure of solid dispersions prepared by the hot-melt method, using the simulated annealing method. Simulation results showed that linear polymer chains form random coils under heat and that drug molecules stick to the surface of the polymer coils, while drug molecules are dispersed molecularly but irregularly within amorphous low-molecular-weight carriers. This research presents more plausible molecular images of solid dispersions than existing theory.
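A generic sketch of the simulated annealing method named above, applied to a toy double-well energy (not the molecular force field used for the solid-dispersion simulations):

```python
import math, random

def simulated_annealing(energy, x0, steps=20_000, t0=1.0, cooling=0.999):
    """Minimal Metropolis-style simulated annealing on a 1-D energy landscape."""
    rng = random.Random(0)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        e_cand = energy(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_cand < e or rng.random() < math.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# Toy energy: global minimum at x = 2, a shallower local well near x = -1.
f = lambda x: (x - 2.0) ** 2 * (x + 1.0) ** 2 + 0.5 * (x - 2.0) ** 2
x_best, e_best = simulated_annealing(f, x0=-1.0)
print(x_best, e_best)
```

The slow cooling is what lets the walker escape the local well it starts in, the same mechanism that lets the molecular simulation relax towards low-energy packings.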
Abstract:
An iterative procedure is proposed for the reconstruction of a temperature field from a linear stationary heat equation with stochastic coefficients and stochastic Cauchy data given on a part of the boundary of a bounded domain. In each step, a series of mixed well-posed boundary-value problems are solved for the stochastic heat operator and its adjoint. Well-posedness of these problems is shown to hold, and convergence in the mean of the procedure is proved. A discretized version of this procedure, based on a Monte Carlo Galerkin finite-element method and suitable for numerical implementation, is discussed. It is demonstrated that the solution to the discretized problem converges to the continuous solution as the mesh size tends to zero.
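A minimal sketch of the Monte Carlo Galerkin finite-element idea: sample the stochastic coefficient, solve a deterministic FEM problem per draw, and average. Here a 1-D steady heat equation with a lognormal conductivity stands in for the paper's stochastic heat operator:

```python
import numpy as np

def fem_solve(a, n=50):
    """Linear FEM for -(a u')' = 1 on (0,1), u(0) = u(1) = 0, constant a."""
    h = 1.0 / n
    main = 2.0 * a / h * np.ones(n - 1)
    off = -a / h * np.ones(n - 2)
    K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # stiffness matrix
    f = h * np.ones(n - 1)            # load vector for a unit heat source
    return np.linalg.solve(K, f)      # interior nodal temperatures

rng = np.random.default_rng(1)
samples = [fem_solve(a) for a in rng.lognormal(mean=0.0, sigma=0.3, size=200)]
u_mean = np.mean(samples, axis=0)     # Monte Carlo estimate of E[u]

# Midpoint check: for constant a the exact solution is x(1-x)/(2a), so
# E[u(0.5)] = 0.125 * E[1/a]; for lognormal(0, 0.3), E[1/a] = exp(0.3^2/2).
print(u_mean[len(u_mean) // 2])
```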
Abstract:
Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem. Much of the advice that does exist relies on custom and practice rather than a rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation used were identified: System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. Firstly, a first-principles analysis was carried out in order to challenge the received wisdom about their strengths and weaknesses, and a series of propositions were developed from this initial analysis. The second stage was to use the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are both in terms of knowledge and practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. Case studies have involved building ‘back to back’ models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to have risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types. It has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach.
In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.
Abstract:
Basic hydrodynamic parameters of an airlift reactor with an internal loop were estimated experimentally and simulated using commercially available CFD software from Fluent. The circulation velocity in a 32-dm³ airlift reactor was measured using the magnetic tracer method, while the gas hold-up was obtained from analysis of the pressure drop using the method of inverted U-tube manometers. Comparison of simulated (in two and three dimensions) and experimental data was performed at different superficial gas velocities in the riser.
Abstract:
We extend a meshless method of fundamental solutions recently proposed by the authors for the one-dimensional two-phase inverse linear Stefan problem, to the nonlinear case. In this latter situation the free surface is also considered unknown, which is more realistic from the practical point of view. Building on the earlier work, the solution is approximated in each phase by a linear combination of fundamental solutions to the heat equation. The implementation and analysis are more complicated in the present situation since one needs to deal with a nonlinear minimization problem to identify the free surface. Furthermore, the inverse problem is ill-posed, since small errors in the input measured data can cause large deviations in the desired solution. Therefore, regularization needs to be incorporated in the objective function which is minimized in order to obtain a stable solution. Numerical results are presented and discussed. © 2014 IMACS.
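The regularization step can be illustrated in isolation. A sketch of Tikhonov regularization on a deliberately ill-conditioned linear system (a Hilbert matrix stands in for the ill-posed collocation system; it is not the paper's MFS solver):

```python
import numpy as np

# Ill-conditioned model system: Hilbert matrix (condition number ~1e10 at n=8).
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # slightly noisy data

x_naive = np.linalg.solve(A, b)                  # noise is hugely amplified

lam = 1e-8                                       # regularization parameter
# Tikhonov: minimize ||A x - b||^2 + lam ||x||^2  =>  (A^T A + lam I) x = A^T b
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The penalty term trades a small bias for stability, which is exactly the role regularization plays in the minimized objective above.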
Abstract:
Using a fixed differential-group delay (DGD) term in the coarse-step method results in a periodic pattern in the autocorrelation function. We solve this problem by inserting a varying DGD term at each integration step, drawn from a Gaussian distribution. Simulation results are given to illustrate the phenomenon and provide some evidence about its statistical nature.
Abstract:
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf is presented, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained.
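For univariate Gaussian pdfs the Kullback-Leibler divergence minimised above has a closed form; a sketch of that scalar case (the paper itself works with joint closed-loop pdfs estimated by MDNs):

```python
import math

def kl_gaussian(mu1, s1, mu2, s2):
    """Closed-form KL(p || q) between univariate Gaussians
    p = N(mu1, s1^2) and q = N(mu2, s2^2)."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

# Divergence of an 'actual' closed-loop pdf from an 'ideal' target pdf:
print(kl_gaussian(0.5, 1.2, 0.0, 1.0))   # positive: the pdfs differ
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))   # zero: identical pdfs
```

Driving this quantity towards zero makes the actual pdf match the ideal one, which is the control objective stated above.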
Abstract:
Accelerated probabilistic modeling algorithms based on the stochastic local search (SLS) technique are considered. A general algorithm scheme and a specific combinatorial optimization method using the “golden section” rule (GS-method) are given. Convergence rates are derived using Markov chains. An overview of current combinatorial optimization techniques is presented.
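The “golden section” rule underlying the GS-method is most easily seen in classic golden-section minimization of a unimodal function (a smooth 1-D stand-in for the combinatorial setting of the paper):

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Golden-section minimization of a unimodal f on [a, b]: each step
    shrinks the bracket by 1/phi while reusing one interior evaluation."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2.0

x_min = golden_section_search(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(x_min)   # close to 2.0
```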
Abstract:
Conventional methods in horizontal drilling processes incorporate magnetic surveying techniques for determining the position and orientation of the bottom-hole assembly (BHA). Such means result in an increased weight of the drilling assembly, higher cost due to the use of non-magnetic collars necessary for shielding the magnetometers, and significant errors in the position of the drilling bit. A fiber-optic gyroscope (FOG) based inertial navigation system (INS) has been proposed as an alternative to magnetometer-based downhole surveying. The use of a tactical-grade FOG-based surveying system in the harsh downhole environment has been shown to be theoretically feasible, yielding a significant reduction in BHA position error (less than 100 m over a 2-h experiment). To limit the growing errors of the INS, an in-drilling alignment (IDA) method has been proposed. This article aims at describing a simple, pneumatics-based design of the IDA apparatus and its implementation downhole. A mathematical model of the setup is developed and tested with Bloodshed Dev-C++. The simulations demonstrate a simple, low-cost and feasible IDA apparatus.
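A hedged illustration of why periodic alignment matters: a constant accelerometer bias alone, double-integrated, produces position error growing quadratically in time (a single-term approximation, not the article's full FOG INS error model):

```python
# Dead-reckoning position error from a constant accelerometer bias b:
# double integration gives error(t) = 0.5 * b * t^2.
def position_error(bias_ms2, t_s):
    return 0.5 * bias_ms2 * t_s ** 2

two_hours = 2 * 3600.0
# Bias that alone would produce 100 m of drift over a 2-hour run:
max_bias = 2 * 100.0 / two_hours ** 2
print(max_bias)                              # below one micro-g
print(position_error(max_bias, two_hours))   # the 100 m error budget
```

The sub-micro-g figure shows how tight the error budget is, hence the need for in-drilling alignment to reset the accumulating errors.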
Abstract:
Integrated supplier selection and order allocation is an important decision for both designing and operating supply chains. This decision is often influenced by the concerned stakeholders, suppliers, plant operators and customers in different tiers. As firms continue to seek competitive advantage through supply chain design and operations, they aim to create optimized supply chains. This calls for the consideration, on the one hand, of multiple conflicting criteria and, on the other, of uncertainties in demand and supply. Although there are studies on supplier selection that use advanced mathematical models covering a stochastic approach, multiple-criteria decision-making techniques and multiple stakeholder requirements separately, to the authors' knowledge there is no work that integrates these three aspects in a common framework. This paper proposes an integrated method for dealing with such problems using a combined Analytic Hierarchy Process-Quality Function Deployment (AHP-QFD) and chance-constrained optimization algorithm approach that selects appropriate suppliers and allocates orders optimally between them. The effectiveness of the proposed decision support system has been demonstrated through application and validation in the bioenergy industry.
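The chance-constrained ingredient can be sketched via the textbook deterministic equivalent for a normally distributed constraint (a generic reduction; the paper embeds such constraints in the AHP-QFD-weighted allocation model):

```python
from statistics import NormalDist

def deterministic_equivalent(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    P(demand <= capacity) >= alpha with demand ~ N(mu, sigma^2):
    capacity >= mu + sigma * z_alpha, where z_alpha is the standard
    normal quantile."""
    z = NormalDist().inv_cdf(alpha)
    return mu + sigma * z

# Capacity needed to satisfy a N(1000, 50^2) demand 95% of the time
# (illustrative figures, not the bioenergy case data):
cap = deterministic_equivalent(1000.0, 50.0, 0.95)
print(cap)
```

This conversion is what lets the stochastic allocation problem be handed to an ordinary deterministic solver.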