933 results for Metals - Formability - Simulation methods
Abstract:
Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture the fundamentally discrete and stochastic nature of cellular biology - most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process in which the probabilities and the numbers of each molecular species must, of course, remain positive. While accurately serving this purpose, the SSA can be computationally inefficient because of its very small time steps, so faster approximations such as the Poisson and binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of the Euler-Maruyama, Milstein and even higher-order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, capturing crucial properties such as the mean and variance more accurately.
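A minimal sketch of the Poisson τ-leap idea the abstract builds on, applied to a hypothetical isomerisation A → B with propensity a(x) = c·x; the reaction and all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_tau_leap(x0, c, tau, t_end):
    """Poisson tau-leap for the reaction A -> B with propensity a(x) = c * x.

    x0: initial copy number of A; c: rate constant; tau: leap size.
    Returns sampled times and copy numbers of A.
    """
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end and x > 0:
        a = c * x                      # propensity of A -> B
        k = rng.poisson(a * tau)       # number of firings in [t, t + tau)
        x = max(x - k, 0)              # clamp to keep the state non-negative
        t += tau
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

t, x = poisson_tau_leap(x0=1000, c=0.5, tau=0.01, t_end=10.0)
print(x[-1])  # stochastic analogue of 1000 * exp(-0.5 * 10), roughly 7
```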
Abstract:
The worldwide deregulation of the power industry has delivered efficiency gains to society; meanwhile, the intensity of competition has increased uncertainty and risk for market participants. Consequently, market participants are keen to hedge market risks and maintain a competitive edge, which largely explains the flourishing of the electricity derivatives market. In this paper, the authors give a comprehensive review of derivative contract pricing methods and propose a new framework for energy derivative pricing to suit the needs of a deregulated electricity market.
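As a hedged illustration of the kind of pricing method such a review covers, the sketch below prices a European call on the spot by Monte Carlo under a mean-reverting (exponential Ornstein-Uhlenbeck) model commonly used for electricity; the model choice and every parameter are assumptions for illustration, not the paper's proposed framework:

```python
import numpy as np

rng = np.random.default_rng(1)

def price_call_mc(s0, kappa, mu, sigma, r, strike, T, n_steps=250, n_paths=50_000):
    """Monte Carlo price of a European call on an electricity spot price.

    ln S follows a mean-reverting Ornstein-Uhlenbeck process:
        d ln S = kappa * (mu - ln S) dt + sigma dW
    (a common reduced-form model; the paper's own framework may differ).
    """
    dt = T / n_steps
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        log_s += kappa * (mu - log_s) * dt + sigma * dw
    payoff = np.maximum(np.exp(log_s) - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(price_call_mc(s0=40.0, kappa=2.0, mu=np.log(40.0), sigma=0.8,
                    r=0.05, strike=45.0, T=0.5))
```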
Abstract:
We discuss the application of TAP mean field methods, known from the statistical mechanics of disordered systems, to Bayesian classification with Gaussian processes. In contrast to previous applications, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.
Abstract:
Machine breakdowns are one of the main sources of disruption and throughput fluctuation in highly automated production facilities. One element in reducing this disruption is ensuring that the maintenance team responds correctly to machine failures. It is, however, difficult to determine the current practice employed by the maintenance team, let alone suggest improvements to it. 'Knowledge-based improvement' is a methodology that aims to address this issue by (a) eliciting knowledge on current practice, (b) evaluating that practice and (c) looking for improvements. The methodology, based on visual interactive simulation and artificial intelligence methods, and its application to a Ford engine assembly facility are described. Copyright © 2002 Society of Automotive Engineers, Inc.
Abstract:
The performance of most operations systems is significantly affected by the interaction of human decision-makers. A methodology, based on the use of visual interactive simulation (VIS) and artificial intelligence (AI), is described that aims to identify and improve human decision-making in operations systems. The methodology, known as 'knowledge-based improvement' (KBI), elicits knowledge from a decision-maker via a VIS and then uses AI methods to represent decision-making. By linking the VIS and AI representation, it is possible to predict the performance of the operations system under different decision-making strategies and to search for improved strategies. The KBI methodology is applied to the decision-making surrounding unplanned maintenance operations at a Ford Motor Company engine assembly plant.
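One way the 'AI representation' step of KBI might look in code, as a hedged sketch: a decision tree trained on hypothetical decision records elicited via the VIS. The feature names and data below are invented for illustration; the paper does not specify this particular learner or encoding:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical decision records elicited from a maintenance engineer via the
# VIS: (queue_length, machine_criticality, estimated_repair_min) -> action.
X = [[3, 2, 15], [1, 3, 60], [5, 1, 10], [0, 3, 45], [4, 2, 20], [2, 1, 90]]
y = ["attend_now", "attend_now", "defer", "attend_now", "attend_now", "defer"]

# A decision tree is one possible AI representation of the elicited strategy;
# the rules it learns can be read back to the decision-maker for validation.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=[
    "queue_length", "machine_criticality", "estimated_repair_min"]))
```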
Abstract:
Simulation modelling has been used for many years in the manufacturing sector but has now become a mainstream tool in business situations. This is partly because of the popularity of business process re-engineering (BPR) and other process-based improvement methods that use simulation to help analyse changes in process design. This textbook includes case studies in both manufacturing and service situations to demonstrate the usefulness of the approach. A further reason for the increasing popularity of the technique is the development of business-oriented and user-friendly Windows-based software. This text provides a guide to the use of the ARENA, SIMUL8 and WITNESS simulation software systems, which are widely used in industry and available to students. Overall, this text provides a practical guide to building a simulation model and implementing its results. All the steps in a typical simulation study are covered, including data collection, input data modelling and experimentation.
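The mechanics that packages such as ARENA, SIMUL8 and WITNESS wrap in a graphical front end can be sketched in a few lines; below is a minimal event-list simulation of a single-server queue, offered as a generic illustration rather than anything from the text:

```python
import heapq, random

random.seed(0)

def mm1_simulation(arrival_rate, service_rate, horizon):
    """Single-server queue driven by an event list -- the core mechanism
    that graphical simulation packages automate."""
    events = [(random.expovariate(arrival_rate), "arrival")]
    queue, busy, served = 0, False, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            # schedule the next arrival, then seize the server or join the queue
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
            if busy:
                queue += 1
            else:
                busy = True
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
        else:  # departure: start the next service if anyone is waiting
            served += 1
            if queue:
                queue -= 1
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:
                busy = False
    return served

print(mm1_simulation(arrival_rate=0.8, service_rate=1.0, horizon=10_000))
```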
Abstract:
Open-loop operation of the stepping motor exploits the inherent advantages of the machine. For near-optimum operation in this mode, however, an accurate system model is required to facilitate controller design. Such a model must be comprehensive and take account of the non-linearities inherent in the system. The result is a complex formulation which can be made manageable with a computational aid. A digital simulation of a hybrid-type stepping motor and its associated drive circuit is proposed. The simulation is based upon a block diagram model which includes reasonable approximations to the major non-linearities. The simulation is shown to yield accurate performance predictions. The determination of the transfer functions is based upon consideration of the physical processes involved rather than upon direct input-output measurements. The effects of eddy currents, saturation, hysteresis, drive circuit characteristics and non-linear torque-displacement characteristics are considered, and methods of determining transfer functions which take account of these effects are offered. The static torque-displacement characteristic is considered in detail and a model is proposed which predicts static torque for any combination of phase currents and shaft position. Methods of predicting the characteristic directly from machine geometry are investigated. Drive circuit design for high-efficiency operation is considered and a model of a bipolar, bilevel circuit is proposed. The transfers between stator voltage and stator current and between stator current and air-gap flux are complicated by the effects of eddy currents, saturation and hysteresis. Frequency response methods, combined with average inductance measurements, are shown to yield reasonable transfer functions. The modelling procedure and subsequent digital simulation are concluded to be a powerful method of non-linear analysis.
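A greatly simplified sketch of the kind of digital simulation described: Euler integration of a hybrid stepping motor reduced to its sinusoidal static torque-displacement characteristic plus viscous damping. The eddy current, saturation and hysteresis effects the thesis models are omitted, and all parameter values are illustrative:

```python
import numpy as np

def simulate_step_response(J=2e-5, B=1e-4, kt=0.5, N=50,
                           i_a=1.0, i_b=0.0, dt=1e-5, t_end=0.05):
    """Euler integration of a greatly simplified hybrid stepping motor:

        J * theta'' = -kt * (i_a*sin(N*theta) + i_b*cos(N*theta)) - B * theta'

    Only the sinusoidal static torque-displacement characteristic is kept;
    eddy currents, saturation and hysteresis (treated in the thesis) are
    omitted, and the parameter values are illustrative.
    """
    theta, omega = 0.02, 0.0   # initial displacement from detent (rad)
    trace = []
    for _ in range(int(t_end / dt)):
        torque = -kt * (i_a * np.sin(N * theta) + i_b * np.cos(N * theta))
        omega += dt * (torque - B * omega) / J
        theta += dt * omega
        trace.append(theta)
    return np.array(trace)

print(simulate_step_response()[-1])  # rotor settles toward the detent position
```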
Abstract:
An investigation is carried out into the design of a small local computer network for eventual implementation on the University of Aston campus. Microprocessors are investigated as a possible choice for use as a node controller for reasons of cost and reliability. Since the network will be local, high-speed lines of megabit order are proposed. After an introduction to several well-known networks, various aspects of networks are discussed, including packet switching, functions of a node and host-node protocol. Chapter three develops the network philosophy with an introduction to microprocessors. Various organisations of microprocessors into multicomputer and multiprocessor systems are discussed, together with methods of achieving reliable computing. Chapter four presents the simulation model and its implementation as a computer program. The major modelling effort is to study the behaviour of messages queueing for access to the network and the message delay experienced on the network. Use is made of spectral analysis to determine the sampling frequency, while Exponentially Weighted Moving Averages are used for data smoothing.
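The exponentially weighted moving average used for data smoothing is simple to state; a minimal sketch (the smoothing constant and sample values are illustrative):

```python
def ewma(samples, alpha=0.2):
    """Exponentially weighted moving average, as used in the thesis to
    smooth message-delay observations: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    smoothed, s = [], None
    for x in samples:
        s = x if s is None else alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

print(ewma([12.0, 15.0, 11.0, 30.0, 14.0, 13.0]))
```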
Abstract:
SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to a combination of the image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
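As a generic illustration of a texture measure of the kind investigated, the sketch below computes windowed local variance as an extra band for classification; it is not the thesis's sequential or parallel segmentation method:

```python
import numpy as np

def local_variance(image, window=5):
    """Windowed variance as a simple texture band to append to the spectral
    bands before classification.  A generic texture measure, offered only to
    illustrate the idea; the thesis's own segmentation methods differ."""
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + window, j:j + window].var()
    return out

img = np.random.default_rng(2).random((32, 32))
print(local_variance(img).mean())
```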
Abstract:
An initial review of the subject emphasises the need for improved fuel efficiency in vehicles and the possible role of aluminium in reducing weight. The problems of formability in manufacture generally, and of aluminium in particular, are discussed in the light of published data. A range of thirteen commercially available sheet aluminium alloys has been compared with respect to the mechanical properties that affect forming processes and behaviour in service. Four alloys were selected for detailed comparison. The formability and strength of these were investigated in terms of the underlying mechanisms of deformation as well as the microstructural characteristics of the alloys, including texture, particle dispersion, grain size and composition. In overall terms, good combinations of strength and ductility are achievable with alloys of the 2xxx and 6xxx series, and some specific alloys are notably better than others. The strength of formed components is affected by paint baking in the final stages of manufacture: generally, alloys of the 6xxx family are strengthened while 2xxx and 5xxx alloys become weaker, although some anomalous behaviour exists. Work hardening of these alloys appears to show rather abrupt decreases over certain strain ranges, which is probably responsible for the relatively low strains at which both diffuse and local necking occur. Using data obtained from extended-range tensile tests, the strain distribution in more complex shapes can be successfully modelled using finite element methods. Sheet failure during forming occurs by abrupt shear fracture in many instances. This condition is favoured by states of biaxial tension, surface defects in the form of fine scratches, and certain types of crystallographic texture. The measured limit strains of the materials can be understood on the basis of the attainment of a critical shear stress for fracture.
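The link between work hardening and diffuse necking can be made concrete with the Considère condition, dσ/dε = σ. Below is a sketch under the assumption of power-law hardening σ = Kε^n, for which necking onsets at ε = n; the constants are illustrative, not the measured alloy data:

```python
import numpy as np

def diffuse_necking_strain(K, n):
    """Considère condition: diffuse necking starts where d(sigma)/d(eps) = sigma.
    For power-law hardening sigma = K * eps**n this happens at eps = n; the
    numerical search below also works for any measured hardening curve."""
    eps = np.linspace(1e-4, 0.5, 5000)
    sigma = K * eps ** n
    dsigma = np.gradient(sigma, eps)
    idx = np.argmax(dsigma <= sigma)   # first point where slope falls to sigma
    return eps[idx]

print(diffuse_necking_strain(K=400.0, n=0.22))   # ~0.22, as theory predicts
```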
Abstract:
Epitopes mediated by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming, so various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations upon them. The system produced allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD. The system has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
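A hedged sketch of generating the kind of NAMD configuration file the server provides: the option names below are standard NAMD keywords, but the file names, parameter set and step count are placeholders, and MHCsim's actual templates may differ:

```python
def write_namd_config(output_name, psf="complex.psf", pdb="complex.pdb",
                      params="par_all27_prot_na.prm", steps=50_000):
    """Write a minimal NAMD-style configuration of the kind MHCsim emits.
    Option names are standard NAMD keywords; the file names, parameter set
    and step count are placeholders, not the server's actual output."""
    config = f"""\
structure          {psf}
coordinates        {pdb}
paraTypeCharmm     on
parameters         {params}
temperature        310
timestep           2.0
cutoff             12.0
switching          on
switchdist         10.0
outputName         {output_name}
numsteps           {steps}
"""
    with open(f"{output_name}.conf", "w") as fh:
        fh.write(config)

write_namd_config("mhc_peptide_run")
```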
Abstract:
A verification task of proving the equivalence of two descriptions of the same device is examined for the case when one of the descriptions is partially defined. In this case, the verification task reduces to checking whether the logical descriptions are equivalent on the domain of the incompletely defined one. A simulation-based approach to solving this task for different vector forms of description representation is proposed. Fast Boolean computations over large Boolean and ternary vectors underlie the proposed methods.
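A toy version of the task, checking equivalence only where the partially defined description is specified. Exhaustive per-pattern simulation is used here for clarity; the paper's bit-parallel computations over wide Boolean and ternary vectors would instead evaluate many input patterns per machine word:

```python
def equivalent_on_domain(f_full, f_partial, n_inputs):
    """Check two single-output Boolean descriptions for equivalence on the
    domain where the partially defined one is specified.

    f_full(bits) -> 0/1; f_partial(bits) -> 0, 1 or None (unspecified).
    """
    for x in range(2 ** n_inputs):
        bits = [(x >> i) & 1 for i in range(n_inputs)]
        expected = f_partial(bits)
        if expected is not None and f_full(bits) != expected:
            return False   # mismatch inside the defined domain
    return True

# Fully defined: 2-input AND.  Partially defined: unspecified when bit 0 is 0.
full = lambda b: b[0] & b[1]
partial = lambda b: (b[0] & b[1]) if b[0] else None
print(equivalent_on_domain(full, partial, n_inputs=2))  # True
```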
Abstract:
In this paper it is explained how to solve a fully connected N-city travelling salesman problem (TSP) using a genetic algorithm. A crossover operator for use in the simulation of a genetic algorithm (GA) with DNA is presented. The aim of the paper is to follow the path of creating a new computational model based on DNA molecules and genetic operations. The paper addresses the problem of exponentially sized algorithms in DNA computing by using biological methods and techniques. After individual encoding and fitness evaluation, a protocol for the next step in a GA, crossover, is needed. The paper also shows how to make the GA faster via different populations of possible solutions.
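For concreteness, a sketch of a classical ordered-crossover (OX) style operator, a standard permutation-preserving crossover for TSP tours; this is a generic GA operator, not necessarily the DNA-based crossover protocol the paper develops:

```python
import random

random.seed(3)

def ordered_crossover(parent_a, parent_b):
    """Ordered-crossover variant for tour permutations: copy a slice from one
    parent, then fill the remaining positions with the missing cities in the
    order they appear in the other parent, so the child is a valid tour."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]
    fill = [c for c in parent_b if c not in child[i:j]]
    for k, pos in enumerate(list(range(0, i)) + list(range(j, n))):
        child[pos] = fill[k]
    return child

tour_a = [0, 1, 2, 3, 4, 5]
tour_b = [5, 4, 3, 2, 1, 0]
print(ordered_crossover(tour_a, tour_b))  # a valid permutation of 0..5
```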