807 results for Automotive supplies - Design - Simulation methods
Abstract:
In this paper, I present a number of leading examples from the empirical literature that use simulation-based estimation methods. For each example, I describe the model, why simulation is needed, and how to simulate the relevant object. There is a section on simulation methods and another on simulation-based estimation methods. The paper concludes by considering the significance of each of the examples discussed and commenting on potential future areas of interest.
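By way of illustration, a minimal sketch of one such method, the method of simulated moments, is given below: an intractable model moment is replaced by an average over simulated draws, with the draws held fixed across parameter values so the objective stays smooth. The log-normal model and all parameter values are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Method of simulated moments (MSM) in its simplest form: match the sample
# mean of observed data to the mean of data simulated from the model.
# Here the "model" is y = exp(theta + eps), eps ~ N(0,1), whose mean
# E[y] = exp(theta + 0.5) is pretended to be unknown and is simulated instead.

rng = np.random.default_rng(0)
theta_true = 0.7
y_obs = np.exp(theta_true + rng.standard_normal(5_000))   # "observed" data

# Fix the simulation draws once (common random numbers), so the MSM
# objective is a smooth function of theta rather than a noisy one.
eps_sim = rng.standard_normal(50_000)

def msm_objective(theta):
    y_sim = np.exp(theta + eps_sim)                # simulate from the model
    return (y_obs.mean() - y_sim.mean()) ** 2      # squared moment gap

theta_hat = minimize_scalar(msm_objective, bounds=(-2, 2), method="bounded").x
print(f"true theta = {theta_true}, MSM estimate = {theta_hat:.3f}")
```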
Abstract:
In this paper, the issue of finding uncertainty intervals for queries in a Bayesian Network is reconsidered. The investigation focuses on Bayesian Nets with discrete nodes and finite populations. An earlier asymptotic approach is compared with a simulation-based approach, together with two further alternatives: one based on a single sample of the Bayesian Net at a particular finite population size, and another which uses expected population sizes together with exact probabilities. We conclude that a query of a Bayesian Net should be expressed as a probability embedded in an uncertainty interval. Based on an investigation of two Bayesian Net structures, the preferred method is the simulation method. However, both the single-sample method and the expected-sample-size method may be useful and are simpler to compute. Any method at all is more useful than none when assessing a Bayesian Net under development or when drawing conclusions from an 'expert' system.
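A minimal sketch of the simulation-based approach the abstract prefers might look as follows: repeatedly draw finite populations of a given size from the net, re-estimate the query from each population, and report a percentile interval around the exact probability. The two-node net (A → B) and all probabilities here are hypothetical.

```python
import numpy as np

# Simulation-based uncertainty interval for a Bayesian-net query: draw
# finite populations of size n from the net, re-estimate the query from
# each population, and report percentiles of the estimates.
# Hypothetical 2-node net A -> B; query is P(B=1).

rng = np.random.default_rng(1)
p_a = 0.3                        # P(A=1)
p_b_given_a = {0: 0.2, 1: 0.8}   # P(B=1 | A)

def sample_population(n):
    a = rng.random(n) < p_a
    b = rng.random(n) < np.where(a, p_b_given_a[1], p_b_given_a[0])
    return b

n_pop, n_reps = 200, 2000        # finite population size, simulation replicates
estimates = np.array([sample_population(n_pop).mean() for _ in range(n_reps)])

exact = p_a * p_b_given_a[1] + (1 - p_a) * p_b_given_a[0]
lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"exact P(B=1) = {exact:.3f}, 95% interval for n={n_pop}: [{lo:.3f}, {hi:.3f}]")
```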
Abstract:
Background: Coronary tortuosity (CT) is a common coronary angiographic finding. Whether CT leads to an apparent reduction in coronary pressure distal to the tortuous segment of the coronary artery is still unknown. The purpose of this study is to determine the impact of CT on coronary pressure distribution by numerical simulation. Methods: 21 idealized models were created to investigate the influence of coronary tortuosity angle (CTA) and coronary tortuosity number (CTN) on coronary pressure distribution. A 2D incompressible Newtonian flow was assumed and the computational simulation was performed using the finite volume method. CTAs of 30°, 60°, 90°, and 120° and CTNs of 0, 1, 2, 3, 4, and 5 were examined under both steady and pulsatile conditions, and the changes of outlet pressure and inlet velocity during the cardiac cycle were taken into account. Results: Coronary pressure distribution was affected by both CTA and CTN. We found that the pressure drop between the start and the end of the CT segment decreased with CTA, and the length of the CT segment also declined with CTA. An increase in CTN resulted in an increase in the pressure drop. Conclusions: Compared to the no-CT case, CT can result in a greater decrease in coronary blood pressure, depending on the severity of tortuosity, and severe CT may cause myocardial ischemia.
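The study itself relies on finite-volume CFD, which is beyond a short sketch; the following back-of-envelope estimate only illustrates why pressure drop should grow with the number of bends, combining a Hagen-Poiseuille term along the path with an assumed minor-loss coefficient per bend. All values, including the loss coefficient, are illustrative assumptions rather than results from the paper.

```python
import numpy as np

# Lumped-parameter analogue of the pressure question studied above: viscous
# (Hagen-Poiseuille) loss along the segment plus an assumed minor-loss
# coefficient k per bend, with the bend count playing the role of CTN.

mu, rho = 3.5e-3, 1060.0   # blood viscosity (Pa*s) and density (kg/m^3)
d, v = 3e-3, 0.15          # vessel diameter (m), mean velocity (m/s)

def delta_p(path_len, n_bends, k_per_bend=0.3):
    """Pressure drop (Pa) over a segment of length path_len with n_bends."""
    poiseuille = 32 * mu * path_len * v / d**2       # laminar straight-tube loss
    minor = n_bends * k_per_bend * 0.5 * rho * v**2  # lumped bend losses
    return poiseuille + minor

for n_bends in range(6):                             # mirrors CTN = 0..5
    print(f"bends={n_bends}: dP ~ {delta_p(0.03, n_bends):.1f} Pa")
```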
Abstract:
With Safe Design and Construction of Machinery, the author presents the results of empirical studies into this significant aspect of safety science in a very readable, well-structured format. The book contains 436 references, 17 tables, one figure and a comprehensive index. Liz Bluff addresses a complex and important, but often neglected domain in OHS – the safety of machinery – in a holistic and profound, yet evidence-based analysis, with many applied cases from her studies, which make the book accessible and a pleasant read. Although the research that led to this remarkable publication might have been primarily focused on regulators, this book can be highly recommended to all OHS academics and practitioners. It provides an important contribution to the body of knowledge in OHS, and establishes one of the few Australian in-depth insights into the significance of machinery producers, rather than machinery users, in the wider framework of risk management. The author bases this fresh perspective on the well-established European Machinery Safety guidelines, and grounds her mixed-methods research predominantly in qualitative analysis of motivation and knowledge, which eventually lead to specific safety outcomes. It should be noted that both European and Australian legal aspects are investigated and considered, as both apply equally to many machinery exporters. A detailed description of the research design and methods can be found in an appendix. Overall, the unique combination of quantitative safety performance data and qualitative analysis of safety behaviours forms a valuable addition to the understanding of machinery safety. The author must be congratulated on making these complex relationships transparent to the reader through her meticulous inquiry.
Abstract:
Objectives: Decision support tools (DSTs) for invasive species management have had limited success in producing convincing results and meeting users' expectations. The problems could be linked to the functional form of the model which represents the dynamic relationship between the invasive species and crop yield loss in the DSTs. The objectives of this study were: a) to compile and review the models tested in field experiments and applied in DSTs; and b) to carry out an empirical evaluation of some popular models and alternatives. Design and methods: This study surveyed the literature and documented strengths and weaknesses of the functional forms of yield loss models. Some widely used models (linear, relative yield and hyperbolic models) and two potentially useful models (the double-scaled and density-scaled models) were evaluated over a wide range of weed densities, maximum potential yield loss, and maximum yield loss per weed. Results: Popular functional forms include hyperbolic, sigmoid, linear, quadratic and inverse models. Many basic models were modified to account for the effect of important factors (weather, tillage and growth stage of crop at weed emergence) influencing weed–crop interaction and to improve prediction accuracy. This limited their applicability in DSTs, as they became less generalized in nature and often applicable to a much narrower range of conditions than would be encountered in the use of DSTs. These factors' effects could be better accounted for by using other techniques. Among the models empirically assessed, the linear model is a very simple model which appears to work well at sparse weed densities, but it produces unrealistic behaviour at high densities. The relative-yield model exhibits expected behaviour at high densities and high levels of maximum yield loss per weed, but probably underestimates yield loss at low to intermediate densities. The hyperbolic model demonstrated reasonable behaviour at lower weed densities, but produced biologically unreasonable behaviour at low rates of loss per weed and high yield loss at the maximum weed density. The density-scaled model is not sensitive to the yield loss at maximum weed density in terms of the number of weeds required to produce a certain proportion of that maximum yield loss. The double-scaled model appeared to produce more robust estimates of the impact of weeds under a wide range of conditions. Conclusions: Previously tested functional forms exhibit problems for use in DSTs for crop yield loss modelling. Of the models evaluated, the double-scaled model exhibits desirable qualitative behaviour under most circumstances.
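For concreteness, the linear and hyperbolic (Cousens) functional forms named above can be sketched as follows; the parameter values are illustrative only.

```python
import numpy as np

# Two of the functional forms named in the abstract, for weed density d
# (plants/m^2): a linear model, and Cousens' rectangular hyperbola in which
# i is the percent yield loss per weed as d -> 0 and a is the asymptotic
# maximum percent yield loss.

def linear_loss(d, i):
    """Percent yield loss, linear in density (no upper bound)."""
    return i * d

def hyperbolic_loss(d, i, a):
    """Cousens' hyperbolic model: bounded above by a."""
    return i * d / (1 + i * d / a)

densities = np.array([1, 5, 25, 100, 400])
i, a = 1.5, 60.0   # assumed: 1.5% loss per weed initially, 60% maximum loss
for d in densities:
    print(f"d={d:4d}: linear={linear_loss(d, i):7.1f}%  "
          f"hyperbolic={hyperbolic_loss(d, i, a):5.1f}%")
```

The output reproduces the qualitative point made above: the linear form exceeds 100% loss at high densities, while the hyperbola saturates at its maximum a.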
Abstract:
Design research informs and supports practice by developing knowledge to improve the chances of producing successful products. Training in design research has been poorly supported. Design research uses human and natural/technical sciences, embracing all facets of design; its methods and tools are adapted from both these traditions. However, design researchers are rarely trained in methods from both traditions. Research in traditional sciences focuses primarily on understanding phenomena related to human, natural, or technical systems. Design research focuses on supporting improvement of such systems, using understanding as a necessary but not sufficient step, and it must embrace methods both for understanding reality and for developing support for its improvement. A one-semester, postgraduate-level, credited course entitled Methodology for Design Research, offered since 2002, is described; it teaches a methodology for carrying out research into design. Its steps are to clarify research success; to understand relevant phenomena of design and how these influence success; to use this understanding to envision design improvement and develop proposals for supporting improvement; to evaluate the support for its influence on success; and, if unacceptable, to modify the support or improve the understanding of success and its links to the phenomena of design. This paper highlights some major issues about the status of design research and describes how design research methodology addresses these. The teaching material, model of delivery, and evaluation of the course on methodology for design research are discussed.
Abstract:
Monte Carlo simulation methods involving splitting of Markov chains have been used in the evaluation of multi-fold integrals in different application areas. We examine in this paper the performance of these methods in the context of evaluating reliability integrals, from the point of view of characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, the Holmes-Diaconis-Ross method, and the generalized splitting algorithm. A few improvisations based on the first order reliability method are suggested for selecting the algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed. Also, an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bringing out the relative merits of the alternative methods.
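As a hedged illustration of the family of splitting methods discussed, here is a minimal subset simulation estimator for a toy Gaussian reliability problem, using the conditional-sampling Markov chain move common in the literature; it is a sketch, not the specific implementation studied in the paper.

```python
import numpy as np

# Minimal subset simulation (Au-Beck style) sketch for P(g(X) < 0) with
# X ~ N(0, I). The small failure probability is written as a product of
# larger conditional probabilities P(g < c_k | g < c_{k-1}), with thresholds
# c_k chosen adaptively as sample quantiles. Toy linear limit-state only;
# exact answer is Phi(-3.5) ~ 2.3e-4.

rng = np.random.default_rng(2)

def g(x):
    beta = 3.5
    return beta * np.sqrt(x.shape[-1]) - x.sum(axis=-1)

def subset_simulation(dim=10, n=1000, p0=0.1, max_levels=20):
    x = rng.standard_normal((n, dim))
    gx = g(x)
    prob, n_seed = 1.0, int(p0 * n)
    for _ in range(max_levels):
        idx = np.argsort(gx)
        c = gx[idx[n_seed - 1]]                 # adaptive intermediate threshold
        if c <= 0:                              # reached the failure event itself
            return prob * np.mean(gx <= 0)
        prob *= p0
        seeds, gs = x[idx[:n_seed]], gx[idx[:n_seed]]
        new_x, new_g = [], []
        # Conditional-sampling MCMC: the proposal preserves N(0, I), and
        # rejecting moves that leave {g <= c} gives the conditional law.
        for s, gval in zip(seeds, gs):
            for _ in range(n // n_seed):
                cand = 0.8 * s + 0.6 * rng.standard_normal(dim)  # 0.6 = sqrt(1-0.8^2)
                gc = g(cand)
                if gc <= c:
                    s, gval = cand, gc
                new_x.append(s.copy()); new_g.append(gval)
        x, gx = np.array(new_x), np.array(new_g)
    return prob

print(f"estimated P_f ~ {subset_simulation():.2e} (exact ~ 2.33e-04)")
```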
Abstract:
Background: Computational protein design is a rapidly maturing field within structural biology, with the goal of designing proteins with custom structures and functions. Such proteins could find widespread medical and industrial applications. Here, we have adapted algorithms from the Rosetta software suite to design much larger proteins, based on ideal geometric and topological criteria. Furthermore, we have developed techniques to incorporate symmetry into designed structures. For our first design attempt, we targeted the (α/β)₈ TIM barrel scaffold. We gained novel insights into TIM barrel folding mechanisms from studying natural TIM barrel structures, and from analyzing previous TIM barrel design attempts. Methods: Computational protein design and analysis was performed using the Rosetta software suite and custom scripts. Genes encoding all designed proteins were synthesized and cloned on the pET20-b vector. Standard circular dichroism and gel chromatographic experiments were performed to determine protein biophysical characteristics. 1D NMR and 2D HSQC experiments were performed to determine protein structural characteristics. Results: Extensive protein design simulations coupled with ab initio modeling yielded several all-atom models of ideal, 4-fold symmetric TIM barrels. Four such models were experimentally characterized. The best designed structure (Symmetrin-1) contained a polar, histidine-rich pore, forming an extensive hydrogen bonding network. Symmetrin-1 was easily expressed and readily soluble. It showed circular dichroism spectra characteristic of well-folded α/β proteins. Temperature melting experiments revealed cooperative and reversible unfolding, with a Tm of 44 °C and a Gibbs free energy of unfolding (ΔG°) of 8.0 kJ/mol. Urea denaturing experiments confirmed these observations, revealing a Cm of 1.6 M and a ΔG° of 8.3 kJ/mol. Symmetrin-1 adopted a monomeric conformation, with an apparent molecular weight of 32.12 kDa, and displayed well-resolved 1D NMR spectra. However, the HSQC spectrum revealed somewhat molten characteristics. Conclusions: Despite the detection of molten characteristics, the creation of a soluble, cooperatively folding protein represents an advancement over previous attempts at TIM barrel design. Strategies to further improve Symmetrin-1 are elaborated. Our techniques may be used to create other large, internally symmetric proteins.
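As a small worked example of what the reported stabilities imply, a two-state model converts a ΔG° of unfolding into a folded fraction at 25 °C. The two-state assumption is a simplification (the HSQC data above suggest molten character), and the sketch is illustrative only.

```python
import numpy as np

# Two-state folding model: K_fold = exp(dG_unfold / RT) and
# fraction folded = K_fold / (1 + K_fold). Uses the dG values reported
# in the abstract; the two-state assumption itself is a simplification.

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # temperature, K

for dg_unfold in (8.0, 8.3):   # kJ/mol, from thermal and urea experiments
    k_fold = np.exp(dg_unfold / (R * T))
    frac_folded = k_fold / (1 + k_fold)
    print(f"dG_unfold = {dg_unfold} kJ/mol -> fraction folded ~ {frac_folded:.3f}")
```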
Abstract:
This report presents the results of a survey of current practice in the use of design optimization conducted amongst UK companies. The survey was carried out by the Design Optimization Group in the Department of Engineering at Cambridge University. The general aims of this research were to understand the current status of design optimization research and practice and to identify ways in which the use of design optimization methods and tools could be improved.
Abstract:
Separating the dynamics of variables that evolve on different timescales is a common assumption in exploring complex systems, and a great deal of progress has been made in understanding chemical systems by treating the fast processes of an activated chemical species independently from the slower processes that precede activation. Protein motion underlies all biocatalytic reactions, and understanding the nature of this motion is central to understanding how enzymes catalyze reactions with such specificity and such rate enhancement. This understanding is challenged by evidence of breakdowns in the separability of the timescales of dynamics in the active site from the motions of the solvating protein. Quantum simulation methods that bridge these timescales by simultaneously evolving quantum and classical degrees of freedom provide an important means of exploring this breakdown. In the following dissertation, three problems of enzyme catalysis are explored through quantum simulation.
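The dissertation's specific methods are not detailed here; as a generic illustration of simultaneously evolving quantum and classical degrees of freedom, below is a minimal Ehrenfest (mean-field) sketch for a two-level quantum system coupled to one classical harmonic coordinate. The Hamiltonian and all parameters are invented for illustration.

```python
import numpy as np

# Minimal Ehrenfest mixed quantum-classical dynamics: the wavefunction of a
# two-level system evolves under H(q), while the classical coordinate q
# feels the mean-field back-reaction force -<psi| dH/dq |psi>.
# Dimensionless (atomic-like) units throughout; illustrative only.

dt, steps = 0.01, 5000
m, omega, c, g_coup = 1.0, 1.0, 0.2, 0.5

def hamiltonian(q):
    # Diabatic 2x2 electronic Hamiltonian; g_coup*q shifts state |1>.
    return np.array([[0.0, c], [c, 1.0 + g_coup * q]])

def dH_dq():
    return np.array([[0.0, 0.0], [0.0, g_coup]])

psi = np.array([1.0 + 0j, 0.0])   # start in state |0>
q, p = 0.0, 0.0

for _ in range(steps):
    # Exact short-time quantum propagator for the 2x2 system.
    e, U = np.linalg.eigh(hamiltonian(q))
    psi = U @ (np.exp(-1j * e * dt) * (U.conj().T @ psi))
    # Classical step: harmonic force plus mean-field back-reaction.
    force = -m * omega**2 * q - np.real(psi.conj() @ dH_dq() @ psi)
    p += force * dt
    q += (p / m) * dt

print(f"final q = {q:.3f}, population of state |1> = {abs(psi[1])**2:.3f}")
```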
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
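To see why efficiency matters here, a brute-force Monte Carlo estimator of a first excursion probability for a linear single-degree-of-freedom oscillator can be sketched as follows. All parameter values are illustrative; the required sample size grows roughly like 1/P_f, which is exactly the cost that more efficient methods are designed to avoid.

```python
import numpy as np

# Brute-force Monte Carlo for a first excursion problem: probability that
# the displacement of a linear SDOF oscillator, driven by discretized
# Gaussian white noise, exceeds a threshold b within duration T.

rng = np.random.default_rng(3)
omega, zeta = 2 * np.pi, 0.05          # natural frequency (rad/s), damping ratio
dt, T, S = 0.01, 10.0, 1.0             # time step (s), duration (s), noise intensity
b = 1.0                                 # displacement threshold
n_steps = int(T / dt)
sigma_w = np.sqrt(2 * np.pi * S / dt)   # std dev of the discrete white noise

def exceeds_threshold():
    x, v = 0.0, 0.0
    w = sigma_w * rng.standard_normal(n_steps)
    for k in range(n_steps):            # semi-implicit Euler time stepping
        a = w[k] - 2 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        if abs(x) > b:
            return True
    return False

n_samples = 2000
p_f = sum(exceeds_threshold() for _ in range(n_samples)) / n_samples
cov = np.sqrt((1 - p_f) / (n_samples * max(p_f, 1e-12)))
print(f"estimated first excursion probability ~ {p_f:.4f} (c.o.v. ~ {cov:.2f})")
```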
Abstract:
The main constituents of air, nitrogen, oxygen, and argon, are increasingly present in industry, where they are used in chemical processes, in the transport of foodstuffs, and in waste processing. The two main technologies for separating the components of air are adsorption and cryogenic distillation. For both processes, however, air contaminants such as carbon dioxide, water vapor, and hydrocarbons must be removed to avoid operational and safety problems. This work therefore studies the pre-purification of air by adsorption. In this system, the air stream flows alternately between two adsorption beds to produce purified air continuously. More specifically, the dissertation focuses on the behavior of PSA (pressure swing adsorption) pre-purification units, in which the desorption step is carried out by reducing the pressure. The analysis of the pre-purification unit starts from a model of the adsorption beds given by a system of partial differential equations expressing the mass balances in the gas stream and in the bed. In this model, the adsorption equilibrium is described by the Dubinin-Astakhov isotherm extended to multicomponent mixtures. To simulate the model, the spatial derivatives are discretized by finite differences and the resulting system of ordinary differential equations is solved by an appropriate solver (method of lines). To simulate the unit in operation, this model is coupled to a convergence algorithm covering the four steps of the operating cycle: adsorption, depressurization, purge, and desorption. This algorithm must guarantee that the final conditions of the last step match the initial conditions of the first step (cyclic steady state). The simulation was implemented as a computational code in the Scilab programming environment (Scilab 5.3.0, 2010), which is freely distributed software. The simulation algorithms for each individual step and for the complete cycle are finally used to analyze the behavior of the pre-purification unit, examining how its performance is affected by changes in design or operating variables. For example, an investigation of the bed loading showed that the ideal bed configuration is 50% alumina followed by 50% zeolite. Process variables were also analyzed, namely the adsorption pressure, the feed flow rate, and the adsorption cycle time, showing that increasing the feed flow rate leads to a loss of specification that can be recovered by reducing the adsorption cycle time. It was also shown that a higher adsorption pressure leads to greater removal of contaminants.
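A heavily simplified method-of-lines sketch in the spirit of this simulation is given below: a single trace contaminant, a linear isotherm, and a linear-driving-force (LDF) rate replace the extended Dubinin-Astakhov equilibrium and the full four-step cycle of the dissertation, and Python/SciPy stands in for Scilab.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for a 1D adsorption-bed mass balance: upwind finite
# differences in space, a linear driving force (LDF) adsorption rate, and
# a stiff ODE solver in time. Single contaminant, linear isotherm q* = K*c;
# all parameter values are illustrative.

n, L = 100, 1.0                # grid cells, bed length (m)
u, eps = 0.1, 0.4              # interstitial velocity (m/s), bed voidage
k_ldf, K = 0.5, 20.0           # LDF coefficient (1/s), linear isotherm slope
dz = L / n
c_feed = 1.0                   # normalized feed concentration

def rhs(t, y):
    c, q = y[:n], y[n:]                           # gas and adsorbed phases
    c_up = np.concatenate(([c_feed], c[:-1]))     # upwind (inlet-side) values
    dq_dt = k_ldf * (K * c - q)                   # rate toward equilibrium
    dc_dt = -u * (c - c_up) / dz - (1 - eps) / eps * dq_dt
    return np.concatenate([dc_dt, dq_dt])

sol = solve_ivp(rhs, (0, 600), np.zeros(2 * n), method="BDF", rtol=1e-6)
print(f"outlet concentration at t=600 s: {sol.y[n - 1, -1]:.3f} "
      f"(approaches {c_feed} once the bed saturates)")
```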
Abstract:
Current design codes for floating offshore structures are based on measures of short-term reliability. That is, a design storm is selected via an extreme value analysis of the environmental conditions and the reliability of the vessel in that design storm is computed. Although this approach yields valuable information on the vessel motions, it does not produce a statistically rigorous assessment of the lifetime probability of failure. An alternative approach is to perform a long-term reliability analysis in which consideration is taken of all sea states potentially encountered by the vessel during the design life. Although permitted as a design approach in current design codes, the associated computational expense generally prevents its use in practice. A new efficient approach to long-term reliability analysis is presented here, the results of which are compared with a traditional short-term analysis for the surge motion of a representative moored FPSO in head seas. This serves to illustrate the failure probabilities actually embedded within current design code methods, and the way in which design methods might be adapted to achieve a specified target safety level.
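The core of the long-term approach can be sketched in a few lines: weight the short-term (per sea state) failure probability by the long-term distribution of sea states, then accumulate over all states met in the design life. Both distributions below are invented for illustration; real analyses use site-specific fitted data.

```python
import numpy as np

# Long-term vs short-term reliability in miniature: the per-sea-state
# failure probability is weighted by the long-term distribution of sea
# states and accumulated over the design life. Illustrative numbers only.

hs = np.linspace(0.5, 15.5, 31)    # significant wave height bins (m)

# Assumed long-term Weibull distribution of Hs (shape 1.5, scale 2.5 m).
shape_w, scale = 1.5, 2.5
pdf = (shape_w / scale) * (hs / scale) ** (shape_w - 1) \
      * np.exp(-(hs / scale) ** shape_w)
p_hs = pdf / pdf.sum()             # normalized bin weights

# Assumed short-term failure probability per 3-hour sea state, rising with Hs.
p_fail_short = 1 / (1 + np.exp(-(hs - 13.0)))   # logistic, ~0.5 at Hs = 13 m

p_per_state = (p_hs * p_fail_short).sum()       # one random 3-hour sea state
n_states = 20 * 365.25 * 8                      # 3-hour states in 20 years
p_life = 1 - (1 - p_per_state) ** n_states      # assumes independent states

print(f"P(fail | random sea state) = {p_per_state:.3e}")
print(f"P(fail within 20 years)    = {p_life:.3e}")
```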
Abstract:
Iteration is unavoidable in the design process and should be incorporated when planning and managing projects in order to minimize surprises and reduce schedule distortions. However, planning and managing iteration is challenging because the relationships between its causes and effects are complex. Most approaches which use mathematical models to analyze the impact of iteration on the design process focus on a relatively small number of its causes and effects. Therefore, insights derived from these analytical models may not be robust under a broader consideration of potential influencing factors. In this article, we synthesize an explanatory framework which describes the network of causes and effects of iteration identified from the literature, and introduce an analytic approach which combines a task network modeling approach with System Dynamics simulation. Our approach models the network of causes and effects of iteration alongside the process architecture which is required to analyze the impact of iteration on design process performance. We show how this allows managers to assess the impact of changes to process architecture and to management levers which influence iterative behavior, accounting for the fact that these changes can occur simultaneously and can accumulate in non-linear ways. We also discuss how the insights resulting from this analysis can be visualized for easier consumption by project participants not familiar with simulation methods.
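As a generic illustration of the kind of iterative behavior such models capture, here is a minimal System Dynamics "rework cycle": a fraction of completed work is flawed, flaws remain undiscovered for a while, and on discovery they return to the backlog. The stocks, rates, and parameters are generic, not those of the article's task-network model.

```python
# Minimal System Dynamics rework cycle, the canonical way iteration is
# represented in project models. Three stocks: work to do, work done
# correctly, and undiscovered rework; flawed work returns to the backlog
# after a first-order discovery delay.

work_to_do, work_done, undiscovered = 100.0, 0.0, 0.0   # task units
productivity = 2.0        # tasks completed per week
error_rate = 0.25         # fraction of completed work that is flawed
discovery_time = 4.0      # average weeks before a flaw is found
dt, t = 0.25, 0.0

while work_to_do + undiscovered > 0.5 and t < 200:
    completion = min(productivity, work_to_do / dt)  # can't do more than remains
    discovery = undiscovered / discovery_time        # first-order flaw discovery
    work_to_do += (discovery - completion) * dt
    work_done += completion * (1 - error_rate) * dt
    undiscovered += (completion * error_rate - discovery) * dt
    t += dt

print(f"project 'done' after ~{t:.1f} weeks "
      f"(vs {100 / productivity:.0f} weeks if there were no rework)")
```

Even this toy version shows the non-linear schedule distortion the article discusses: a 25% error rate stretches the schedule well beyond a 25% overrun, because rework itself generates further rework.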