932 results for Metals - Formability - Simulation methods


Relevance: 100.00%

Abstract:

Photocopy. Springfield, Va., Distributed by Clearinghouse for Federal Scientific and Technical Information [1969]

Relevance: 100.00%

Abstract:

This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach, a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels, such that samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and a significant improvement in efficiency over existing approaches. (C) 2004 American Institute of Physics.
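The core idea — capping each channel's firing count with a binomial draw so populations can never go negative — can be sketched for a reversible isomerization A ⇌ B. This is an illustrative simplification, not the paper's full algorithm: the rate constants are invented, and the paper's stepsize-selection conditions and its treatment of species shared by several channels are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Reversible isomerization A <-> B with illustrative rate constants.
c1, c2 = 0.5, 0.3  # A -> B and B -> A

def binomial_leap_step(xA, xB, tau):
    # Each channel consumes one molecule of a single reactant, so its
    # firing count is capped by that reactant's population: a draw
    # k ~ Binomial(n, min(1, c*tau)) can never exceed n, which is the
    # property the paper exploits to avoid negative molecule numbers.
    k1 = rng.binomial(xA, min(1.0, c1 * tau))  # firings of A -> B
    k2 = rng.binomial(xB, min(1.0, c2 * tau))  # firings of B -> A
    return xA - k1 + k2, xB + k1 - k2

xA, xB = 1000, 0
tau, t_end = 0.05, 10.0
for _ in range(int(t_end / tau)):
    xA, xB = binomial_leap_step(xA, xB, tau)

# The total population is conserved; xA should fluctuate around the
# equilibrium value c2 / (c1 + c2) * 1000 = 375.
print(xA, xB)
```

A Poisson draw with mean c1·xA·τ, by contrast, has unbounded support and could drive xA negative at large τ.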

Relevance: 100.00%

Abstract:

Cellular mobile radio systems will be of increasing importance in the future. This thesis describes research work concerned with the teletraffic capacity and the computer control requirements of such systems. The work involves theoretical analysis and experimental investigations using digital computer simulation. New formulas are derived for the congestion in single-cell systems in which there are both land-to-mobile and mobile-to-mobile calls and in which mobile-to-mobile calls go via the base station. Two approaches are used: the first yields modified forms of the familiar Erlang and Engset formulas, while the second gives more complicated but more accurate formulas. The results of computer simulations to establish the accuracy of the formulas are described. New teletraffic formulas are also derived for the congestion in multi-cell systems. Fixed, dynamic and hybrid channel assignments are considered, and the formulas agree with previously published simulation results. Simulation programs are described for the evaluation of the speech traffic of mobiles and for the investigation of a possible computer network for the control of the speech traffic. The programs were developed according to the structured programming approach, leading to programs of modular construction. Two simulation methods are used for the speech traffic: the roulette method and the time-true method. The first is economical but has some restrictions, while the second is expensive but gives comprehensive answers. The proposed control network operates at three hierarchical levels performing various control functions, which include the setting-up and clearing-down of calls, the hand-over of calls between cells and the address-changing of mobiles travelling between cities. The results demonstrate the feasibility of the control network and indicate that small mini-computers inter-connected via voice-grade data channels would be capable of providing satisfactory control.
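For context, the classical Erlang B formula — the starting point the thesis modifies for mixed land-to-mobile and mobile-to-mobile traffic — can be computed with the standard numerically stable recurrence. The traffic and channel values below are illustrative, not figures from the thesis.

```python
def erlang_b(traffic: float, channels: int) -> float:
    """Erlang B blocking probability via the stable recurrence
    B(E, 0) = 1;  B(E, n) = E*B(E, n-1) / (n + E*B(E, n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

# Illustrative values: 10 erlangs of offered traffic on a 15-channel cell.
print(f"blocking probability: {erlang_b(10.0, 15):.4f}")
```

The recurrence avoids the factorials of the closed-form expression, so it stays accurate even for hundreds of channels.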

Relevance: 100.00%

Abstract:

This work presents a two-dimensional risk assessment method based on quantifying the probability of occurrence of contaminant source terms, as well as assessing the resultant impacts. The risk is calculated using Monte Carlo simulation methods, whereby synthetic contaminant source terms were generated according to the same distribution as historically occurring pollution events, or according to an a priori probability distribution. The spatial and temporal distributions of the generated contaminant concentrations at pre-defined monitoring points within the aquifer were then simulated from repeated realisations using integrated mathematical models. Risk is quantified as the number of times user-defined ranges of concentration magnitudes were exceeded. The utility of the method was demonstrated using hypothetical scenarios, and the risk of pollution from a number of sources all occurring by chance together was evaluated. The results are presented in the form of charts and spatial maps. The generated risk maps show the risk of pollution at each observation borehole, as well as the trends within the study area. This capability to generate synthetic pollution events from numerous potential sources, based on the historical frequency of their occurrence, proved to be a great asset of the method and a significant advantage over contemporary methods.
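The exceedance-counting core of such a method can be sketched as follows. Everything here is a hypothetical stand-in: the lognormal spill-magnitude distribution, the single-factor dilution "model", and the threshold are invented for illustration, in place of the integrated transport models used in the work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic source terms: spill magnitudes drawn from a lognormal fitted
# (hypothetically) to historical pollution events.
n_realisations = 10_000
spill_mass = rng.lognormal(mean=2.0, sigma=0.8, size=n_realisations)   # kg

# Stand-in for the integrated transport model: a random dilution factor
# mapping each spill to a concentration at one monitoring borehole.
dilution = rng.uniform(1.0, 10.0, size=n_realisations)                 # (mg/m3)/kg
concentration = spill_mass * dilution                                  # mg/m3

# Risk = fraction of realisations in which a user-defined concentration
# threshold is exceeded at the monitoring point.
threshold = 30.0                                                       # mg/m3
risk = float(np.mean(concentration > threshold))
print(f"P(exceedance at borehole) ~ {risk:.3f}")
```

Repeating this per borehole gives exactly the kind of per-observation-point risk values that the study's risk maps display.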

Relevance: 100.00%

Abstract:

Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem, and much of the advice that does exist relies on custom and practice rather than rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation in use were identified: System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. First, a first-principles analysis was carried out to challenge the received wisdom about their strengths and weaknesses, and a series of propositions was developed from this initial analysis. The second stage used the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are both to knowledge and to practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. The case studies involved building ‘back to back’ models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types; SD has also been found to carry risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types, and it has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach.
In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.

Relevance: 100.00%

Abstract:

As more of the economy moves from traditional manufacturing to the service sector, the nature of work is becoming less tangible, and the representation of human behaviour in models is therefore becoming more important. Representing human behaviour and decision making in models is challenging, both in terms of capturing the essence of the processes and in terms of the ways those behaviours and decisions can be represented in the models themselves. To advance understanding in this area, a useful first step is to evaluate and classify the various types of behaviour and decision making that need to be modelled. This talk will set out an initial classification of the different types of behaviour and decision making that a modeller might want to represent in a model. The three main simulation methods, System Dynamics, Agent Based Modelling and Discrete Event Simulation, are then assessed for their capability and suitability in representing these various aspects; there is some evidence that all three methods can, within limits, represent the key aspects of the system being modelled. Illustrations of behavioural modelling will be drawn from cases in supply chain management, evacuation modelling and rail disruption.

Relevance: 100.00%

Abstract:

Over the past 50 years there has been considerable progress in our understanding of biomolecular interactions at an atomic level. This in turn has allowed molecular simulation methods employing fully atomistic modelling to develop at ever larger scales. However, some challenging areas remain where there is either a lack of atomic-resolution structures or where the simulation system is inherently complex. An area where both challenges are present is that of membranes containing membrane proteins. In this review we analyse a new practical approach to membrane protein study that offers a potential new route to high-resolution structures and the possibility of simplifying simulations. These new approaches collectively recognise that preserving the interaction between the membrane protein and the lipid bilayer is often essential to maintain structure and function. The new methods preserve these interactions by producing nano-scale disc-shaped particles that include the bilayer and the chosen protein. Currently two approaches lead in this area: the MSP system, which relies on peptides to stabilise the discs, and SMALPs, where an amphipathic styrene maleic acid copolymer is used. Both methods greatly facilitate protein production and hence have the potential to accelerate atomic-resolution structure determination, as well as providing a simplified format for simulations of membrane protein dynamics.

Relevance: 100.00%

Abstract:

Sol-gel-synthesized bioactive glasses may be formed via a hydrolysis-condensation reaction, with silica introduced in the form of tetraethyl orthosilicate (TEOS) and calcium typically added in the form of calcium nitrate. The synthesis reaction proceeds in an aqueous environment; the resultant gel is dried before stabilization by heat treatment. These materials, being amorphous, are complex at the level of their atomic-scale structure, but their bulk properties may only be properly understood on the basis of that structural insight. Thus, a full understanding of their structure-property relationship may only be achieved through the application of a coherent suite of leading-edge experimental probes, coupled with the cogent use of advanced computer simulation methods. Using as an exemplar a calcia-silica sol-gel glass of the kind developed by Larry Hench, to whose memory this paper is dedicated, we illustrate the successful use of high-energy X-ray and neutron scattering (diffraction) methods, magic-angle-spinning solid-state NMR, and molecular dynamics simulation as components of a powerful methodology for the study of amorphous materials.

Relevance: 100.00%

Abstract:

Portland cement, a very common construction material, contains natural gypsum. To decrease manufacturing costs, the cement industry is substituting the gypsum in its composition with small quantities of phosphogypsum, a residue generated by fertilizer production that consists essentially of calcium sulfate dihydrate and some impurities, such as fluoride, metals in general, and radionuclides. Currently, tons of phosphogypsum are stored in the open air near the fertilizer plants, causing contamination of the environment. The 226Ra present in these materials produces 222Rn gas when it undergoes radioactive decay. This radioactive gas, when inhaled together with its decay products, which deposit in the lungs, produces exposure to radiation and can be a potential cause of lung cancer. Thus, the objective of this study was to measure the 222Rn concentration levels from cylindrical samples of Portland cement, gypsum and phosphogypsum mortar from the state of Paraná, as well as to characterize the material and estimate the radon concentration in a hypothetical dwelling with walls covered by such materials. The experimental setup for 222Rn activity measurements was based on an AlphaGUARD detector (Saphymo GmbH). Qualitative and quantitative analysis was performed by gamma spectrometry and by EDXRF with Au- and Ag-target tubes (AMPTEK) and a Mo-target tube (ARTAX), together with mechanical testing using X-ray equipment (Gilardoni) and a mechanical press (EMIC). The average radon activities obtained from the studied materials in the air of the containers were 854 ± 23 Bq/m³, 60.0 ± 7.2 Bq/m³ and 52.9 ± 5.4 Bq/m³ for Portland cement, gypsum and phosphogypsum mortar, respectively. Extrapolated to a hypothetical dwelling volume of 36 m³ with walls covered by these materials, the results were 3366 ± 91 Bq/m³, 237 ± 28 Bq/m³ and 208 ± 21 Bq/m³, respectively.
Considering the limit of 300 Bq/m³ established by the ICRP, it can be concluded that the use of Portland cement plaster in dwellings is not safe and requires specific mitigation procedures. From the gamma spectrometry results, the radium equivalent activity concentrations (Raeq) for Portland cement, gypsum and phosphogypsum mortar were calculated as 78.2 ± 0.9 Bq/kg, 58.2 ± 0.9 Bq/kg and 68.2 ± 0.9 Bq/kg, respectively. All values of radium equivalent activity concentration for the studied samples are below the maximum level of 370 Bq/kg. Qualitative and quantitative analysis of the EDXRF spectra obtained from the studied mortar samples allowed the identification and quantification of the constituent elements, such as Ca, S and Fe.
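For reference, the radium equivalent activity index quoted above is conventionally computed as Raeq = A_Ra + 1.43·A_Th + 0.077·A_K from the specific activities of 226Ra, 232Th and 40K. The sketch below applies it to assumed specific activities, not to the samples measured in this study.

```python
def radium_equivalent(a_ra: float, a_th: float, a_k: float) -> float:
    """Radium equivalent activity (Bq/kg) from the specific activities of
    226Ra, 232Th and 40K: Raeq = A_Ra + 1.43*A_Th + 0.077*A_K."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

# Assumed specific activities for a hypothetical mortar sample (Bq/kg);
# these are NOT the measured values reported above.
raeq = radium_equivalent(a_ra=40.0, a_th=20.0, a_k=120.0)
verdict = "below" if raeq < 370.0 else "above"
print(f"Raeq = {raeq:.2f} Bq/kg ({verdict} the 370 Bq/kg limit)")
```

The 1.43 and 0.077 coefficients weight thorium and potassium so that each term produces the same gamma dose as the equivalent 226Ra activity.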

Relevance: 100.00%

Abstract:

In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results reflects a failure of the deterministic methods rather than a shortcoming of the stochastic simulations.

Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon that the synapse might exploit to differentiate Ca$^{2+}$ signaling that would lead to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to confront this non-ideality directly. A natural next step in understanding the chemical physics that underlies these processes is to consider \textit{in silico} methods, specifically atomistic simulation methods, that might augment our modeling efforts.

In the second part of this thesis, we use evolutionary algorithms to optimize \textit{in silico} methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis focuses on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions involved in ligand binding, as discussed in the first part of this thesis.
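As a toy illustration of the second part, a simple evolutionary search can fit the parameters of a charge-equilibration-style model. The electronegativity-equalization expression below ignores the Coulomb interaction term of full charge-equilibration schemes, and the target charge and parameter ranges are invented for illustration.

```python
import random

random.seed(0)

q_ref = 0.35  # hypothetical target partial charge on atom A (e.g. from an ab initio reference)

def model_charge(chi_a, chi_b, j_a, j_b):
    # Electronegativity equalization for a neutral diatomic A-B, ignoring
    # the Coulomb interaction term of full QEq:
    # chi_A + J_A*q = chi_B - J_B*q  =>  q = (chi_B - chi_A) / (J_A + J_B)
    return (chi_b - chi_a) / (j_a + j_b)

def fitness(p):
    return -(model_charge(*p) - q_ref) ** 2  # higher is better

# (mu + lambda)-style evolution: keep the best half, refill with mutants.
pop = [[random.uniform(1.0, 10.0) for _ in range(4)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]
    # Gaussian mutation; the clamp keeps parameters positive (hardness > 0).
    pop = parents + [[max(0.1, g + random.gauss(0.0, 0.2)) for g in random.choice(parents)]
                     for _ in range(20)]

best = max(pop, key=fitness)
print(f"fitted charge = {model_charge(*best):.4f} (target {q_ref})")
```

Because the parents survive unchanged each generation, the best fitness is monotonically non-decreasing, a property that makes even this crude scheme converge reliably on smooth objectives.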

Relevance: 100.00%

Abstract:

Part 13: Virtual Reality and Simulation

Relevance: 100.00%

Abstract:

Background: Non-fatal health outcomes from diseases and injuries are a crucial consideration in the promotion and monitoring of individual and population health. The Global Burden of Disease (GBD) studies done in 1990 and 2000 have been the only studies to quantify non-fatal health outcomes across an exhaustive set of disorders at the global and regional level. Neither effort quantified uncertainty in prevalence or years lived with disability (YLDs).

Methods: Of the 291 diseases and injuries in the GBD cause list, 289 cause disability. For 1160 sequelae of the 289 diseases and injuries, we undertook a systematic analysis of prevalence, incidence, remission, duration, and excess mortality. Sources included published studies, case notification, population-based cancer registries, other disease registries, antenatal clinic serosurveillance, hospital discharge data, ambulatory care data, household surveys, other surveys, and cohort studies. For most sequelae, we used a Bayesian meta-regression method, DisMod-MR, designed to address key limitations in descriptive epidemiological data, including missing data, inconsistency, and large methodological variation between data sources. For some disorders, we used natural history models, geospatial models, back-calculation models (models calculating incidence from population mortality rates and case fatality), or registration completeness models (models adjusting for incomplete registration with health-system access and other covariates). Disability weights for 220 unique health states were used to capture the severity of health loss. YLDs by cause at age, sex, country, and year levels were adjusted for comorbidity with simulation methods. We included uncertainty estimates at all stages of the analysis.

Findings: Global prevalence for all ages combined in 2010 across the 1160 sequelae ranged from fewer than one case per 1 million people to 350 000 cases per 1 million people. Prevalence and severity of health loss were weakly correlated (correlation coefficient −0·37). In 2010, there were 777 million YLDs from all causes, up from 583 million in 1990. The main contributors to global YLDs were mental and behavioural disorders, musculoskeletal disorders, and diabetes or endocrine diseases. The leading specific causes of YLDs were much the same in 2010 as they were in 1990: low back pain, major depressive disorder, iron-deficiency anaemia, neck pain, chronic obstructive pulmonary disease, anxiety disorders, migraine, diabetes, and falls. Age-specific prevalence of YLDs increased with age in all regions and has decreased slightly from 1990 to 2010. Regional patterns of the leading causes of YLDs were more similar compared with years of life lost due to premature mortality. Neglected tropical diseases, HIV/AIDS, tuberculosis, malaria, and anaemia were important causes of YLDs in sub-Saharan Africa.

Interpretation: Rates of YLDs per 100 000 people have remained largely constant over time but rise steadily with age. Population growth and ageing have increased YLD numbers and crude rates over the past two decades. Prevalences of the most common causes of YLDs, such as mental and behavioural disorders and musculoskeletal disorders, have not decreased. Health systems will need to address the needs of the rising numbers of individuals with a range of disorders that largely cause disability but not mortality. Quantification of the burden of non-fatal health outcomes will be crucial to understand how well health systems are responding to these challenges. Effective and affordable strategies to deal with this rising burden are an urgent priority for health systems in most parts of the world.

Funding: Bill & Melinda Gates Foundation.
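The comorbidity adjustment mentioned in the Methods is simulation based; a common way to combine disability weights across co-occurring conditions is multiplicatively, DW = 1 − Π(1 − DW_i). The sketch below illustrates that idea with invented prevalences and weights, not GBD inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented prevalences and disability weights for three conditions.
prevalence = {"low back pain": 0.10, "major depression": 0.05, "anaemia": 0.15}
weight = {"low back pain": 0.04, "major depression": 0.15, "anaemia": 0.02}

# Simulate a population, assigning conditions independently by prevalence.
n = 100_000
has = {cond: rng.random(n) < p for cond, p in prevalence.items()}

# Combine disability weights multiplicatively per simulant:
# DW_total = 1 - prod_i (1 - DW_i) over that simulant's conditions.
dw_total = np.ones(n)
for cond in prevalence:
    dw_total *= np.where(has[cond], 1.0 - weight[cond], 1.0)
dw_total = 1.0 - dw_total

print(f"mean comorbidity-adjusted disability weight: {dw_total.mean():.4f}")
```

The multiplicative rule keeps every combined weight below 1, so a simulant with several conditions is never counted as worse than dead, which naive addition of weights would allow.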

Relevance: 100.00%

Abstract:

It has become increasingly important to investigate the impacts of wind farms on power system operation, as ever-increasing penetration levels of wind power have the potential to bring about a series of dynamic stability problems for power systems. This paper undertakes such an investigation of the small-signal and transient stabilities of power systems that are separately integrated with three types of wind turbine generators (WTGs), namely the squirrel cage induction generator (SCIG), the doubly fed induction generator (DFIG), and the permanent magnet generator (PMG). To examine the effects of these WTGs on the stability of a power system under different operating conditions, a selected synchronous generator (SG) of the well-known Western Electricity Coordinating Council (WECC) three-unit, nine-bus system and of an eight-unit, 24-bus system is replaced in turn by each type of WTG with the same capacity. The performances of the power system in response to disturbances are then systematically compared. Specifically, the following comparisons are undertaken: (1) performances of the power system before and after the integration of the WTGs; and (2) performances of the power system, and the associated consequences, when the SCIG, DFIG, or PMG is separately connected to the system. These stability case studies utilize both eigenvalue analysis and dynamic time-domain simulation methods.
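The eigenvalue-analysis half of such studies can be illustrated with the smallest possible case: the linearized classical swing model of one machine, whose state matrix is checked for eigenvalues with negative real parts. The machine parameters here are illustrative per-unit values, not data from the WECC test system.

```python
import numpy as np

# Classical one-machine swing model, linearized about an operating point:
#   d(d_delta)/dt = omega_s * d_omega
#   d(d_omega)/dt = (-Ks * d_delta - D * d_omega) / (2H)
# H (inertia), D (damping) and Ks (synchronizing torque coefficient) are
# illustrative assumptions.
H, D, Ks = 3.5, 2.0, 1.2
omega_s = 2 * np.pi * 60  # synchronous speed, rad/s

A = np.array([[0.0, omega_s],
              [-Ks / (2 * H), -D / (2 * H)]])

eigvals = np.linalg.eigvals(A)
for lam in eigvals:
    zeta = -lam.real / abs(lam)  # damping ratio of the mode
    print(f"lambda = {lam:.4f}, damping ratio = {zeta:.4f}")

# Small-signal stable iff every eigenvalue has a negative real part.
print("stable:", bool(all(lam.real < 0 for lam in eigvals)))
```

In the full studies the state matrix is the linearization of the complete multi-machine model, and replacing an SG with a WTG changes both the eigenvalue locations and the damping ratios computed this way.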

Relevance: 100.00%

Abstract:

Living cells are the functional units of organs, controlling reactions to their environment. However, the mechanics of living cells can be difficult to characterize because of the complexity of their microscale structures and the associated dynamic cellular processes. Fortunately, multiscale modelling provides a powerful simulation tool for studying the mechanical properties of these soft, hierarchical biological systems. This paper reviews recent developments in hierarchical multiscale modelling techniques aimed at understanding cytoskeleton mechanics. The discussion covers the main cytoskeletal components: intermediate filaments, microtubules and microfilament networks. The mechanical performance of the different cytoskeleton components is discussed with respect to their structural and material properties. Explicit granular simulation methods with different coarse-grained strategies are adopted for these cytoskeleton components, and the simulation details are introduced in this review.
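A minimal example of the explicit granular (coarse-grained) approach the review surveys: a bead-spring filament relaxed under an axial load with overdamped dynamics. The stiffness, spacing and force values are illustrative reduced units, not parameters from the reviewed models.

```python
import numpy as np

# Bead-spring filament: n_beads beads joined by harmonic springs, left end
# clamped, constant axial force on the free end, relaxed with overdamped
# dynamics. All values are illustrative reduced units.
n_beads, k_spring, rest_len = 10, 5.0, 1.0
f_pull, dt, n_steps = 0.5, 0.01, 20_000

x = np.arange(n_beads) * rest_len  # bead positions along the filament axis

for _ in range(n_steps):
    ext = np.diff(x) - rest_len        # extension of each spring
    f = np.zeros(n_beads)
    f[:-1] += k_spring * ext           # a stretched spring pulls its left bead right
    f[1:] -= k_spring * ext            # ...and its right bead left
    f[0] = 0.0                         # clamp the left end
    f[-1] += f_pull                    # axial pull on the free end
    x += dt * f                        # overdamped (mobility = 1) update

# At equilibrium each spring carries tension f_pull, extending by f_pull/k_spring.
extension = x[-1] - x[0] - (n_beads - 1) * rest_len
print(f"total extension: {extension:.3f}")
```

Real coarse-grained cytoskeleton models add bending and torsional terms plus thermal noise on top of this stretching backbone, but the bead-and-potential structure is the same.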

Relevance: 100.00%

Abstract:

This paper presents the architecture of a fault-tolerant, special-purpose multi-microprocessor system for solving Partial Differential Equations (PDEs). The modular nature of the architecture allows the use of hundreds of Processing Elements (PEs) for high throughput. Its performance is evaluated by both analytical and simulation methods. The results indicate that the system can achieve high operation rates and is not sensitive to inter-processor communication delay.
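The analytical side of such a performance evaluation often reduces to a compute-versus-communication model. The sketch below is a generic, hypothetical example of that style for an iterative grid solver on p processing elements; the decomposition and all constants are assumptions, not the model used in the paper.

```python
# Per-iteration time model for an n x n grid solver on p processing
# elements (PEs) with a 1-D strip decomposition: each PE updates n*n/p
# points, then exchanges two n-point halo rows with its neighbours.

def iteration_time(n: int, p: int, t_calc: float, t_comm: float) -> float:
    compute = (n * n / p) * t_calc  # time to update this PE's points
    exchange = 2 * n * t_comm       # time to exchange two halo rows
    return compute + exchange

def speedup(n: int, p: int, t_calc: float, t_comm: float) -> float:
    # A single-PE run needs no inter-processor communication.
    return iteration_time(n, 1, t_calc, 0.0) / iteration_time(n, p, t_calc, t_comm)

# Illustrative constants: 1 time unit per point update, 0.1 per halo word.
for p in (16, 64, 256):
    print(f"p = {p:3d}  speedup = {speedup(512, p, 1.0, 0.1):.1f}")
```

Because the exchange term is independent of p in this decomposition, speedup saturates as p grows — the kind of communication-delay sensitivity the paper's evaluation quantifies (and finds small) for its architecture.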