855 results for models of simulation
Abstract:
In this article, we study traffic flow in the presence of speed-breaking structures. Speed breakers are typically used to reduce the local speed of vehicles near institutions such as schools and hospitals. Through a cellular automata model, we study the impact of such structures on global traffic characteristics. The simulation results indicate that the presence of speed breakers can reduce the global flow at moderate global densities; at low and high global densities, however, speed breakers have no impact on the global flow. Furthermore, the speed limit enforced by the speed breaker creates a phase distinction: for a given global density and slowdown probability, as the enforced speed limit increases, the traffic moves from the reduced-flow phase to the maximum-flow phase. This underlines the importance of proper design of these structures to avoid undesired flow restrictions.
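The abstract does not give the update rules, but models of this kind are typically built on the Nagel-Schreckenberg cellular automaton. A minimal sketch, assuming a ring road with a single speed-breaker cell that caps velocity at V_SB (all parameter values and names here are illustrative, not taken from the paper):

```python
import numpy as np

# Minimal Nagel-Schreckenberg ring road with one speed-breaker cell.
# All parameter values are illustrative; the paper's rules may differ.
L, N, VMAX, V_SB, P_SLOW, SB_CELL = 200, 40, 5, 2, 0.3, 100
rng = np.random.default_rng(0)
pos = np.sort(rng.choice(L, size=N, replace=False))  # distinct cells
vel = np.zeros(N, dtype=int)

def step(pos, vel):
    order = np.argsort(pos)                      # keep cars ordered on the ring
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L      # empty cells to the car ahead
    vel = np.minimum(vel + 1, VMAX)              # accelerate
    vel = np.minimum(vel, gaps)                  # avoid collisions
    at_sb = pos == SB_CELL                       # cap speed on the breaker cell
    vel[at_sb] = np.minimum(vel[at_sb], V_SB)
    brake = rng.random(N) < P_SLOW               # random slowdown
    vel[brake] = np.maximum(vel[brake] - 1, 0)
    return (pos + vel) % L, vel

flow = 0
for _ in range(1000):
    pos, vel = step(pos, vel)
    flow += vel.sum()
print("mean flow per site per step:", flow / (1000 * L))
```

Sweeping the global density N/L and the cap V_SB would reproduce the kind of flow-density comparison the abstract describes.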
Abstract:
We show by numerical simulations that discretized versions of commonly studied continuum nonlinear growth equations (such as the Kardar-Parisi-Zhang equation and the Lai-Das Sarma-Villain equation) and related atomistic models of epitaxial growth have a generic instability in which isolated pillars (or grooves) on an otherwise flat interface grow in time when their height (or depth) exceeds a critical value. Depending on the details of the model, the instability found in the discretized version may or may not be present in the truly continuum growth equation, indicating that the behavior of discretized nonlinear growth equations may be very different from that of their continuum counterparts. This instability can be controlled either by the introduction of higher-order nonlinear terms with appropriate coefficients or by restricting the growth of pillars (or grooves) by other means. A number of such "controlled instability" models are studied by simulation. For appropriate choices of the parameters used for controlling the instability, these models exhibit intermittent behavior, characterized by multiexponent scaling of height fluctuations, over the time interval during which the instability is active. The behavior found in this regime is very similar to the "turbulent" behavior observed in recent simulations of several one- and two-dimensional atomistic models of epitaxial growth.
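For reference, the continuum KPZ equation mentioned above has the standard form (a fact from the general literature, not a quotation from this paper):

```latex
% Kardar-Parisi-Zhang equation for an interface height h(x, t):
% nu is the surface tension, lambda the nonlinearity coefficient,
% and eta a Gaussian white-noise term.
\[
  \frac{\partial h}{\partial t}
    = \nu \nabla^2 h
    + \frac{\lambda}{2} \left( \nabla h \right)^2
    + \eta(\mathbf{x}, t)
\]
```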
Abstract:
A systematic assessment of the submodels of the conditional moment closure (CMC) formalism for the autoignition problem is carried out using direct numerical simulation (DNS) data. An initially non-premixed n-heptane/air system, subjected to three-dimensional, homogeneous, isotropic, decaying turbulence, is considered. Two kinetic schemes are considered for the chemistry: (1) a one-step and (2) a reduced four-step reaction mechanism. An alternative formulation is developed for closure of the mean chemical source term.
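The abstract does not state the closure itself; for context, the standard first-order CMC closure (from the general literature, not necessarily the alternative formulation developed in this paper) evaluates the conditional mean reaction rate at the conditional mean composition and temperature:

```latex
% First-order CMC closure: the conditional mean reaction rate of species
% alpha is approximated by the rate evaluated at the conditional means
% Q_alpha(eta) = <Y_alpha | xi = eta> and Q_T(eta) = <T | xi = eta>,
% where xi is the mixture fraction and eta its sample-space variable.
\[
  \langle \dot{\omega}_\alpha \mid \xi = \eta \rangle
    \approx \dot{\omega}_\alpha\!\left( Q_1(\eta), \ldots, Q_n(\eta), Q_T(\eta) \right)
\]
```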
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes on a region in Euclidean space, e.g., the unit square. After deployment, the nodes self-organise into a mesh topology. In a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this paper, we analyse the performance of this approximation. We show that nodes with a certain hop distance from a fixed anchor node lie within a certain annulus with probability approaching unity as the number of nodes n → ∞. We take a uniform, i.i.d. deployment of n nodes on a unit square, and consider the geometric graph on these nodes with radius r(n) = c√(ln(n)/n). We show that, for a given hop distance h of a node from a fixed anchor on the unit square, the Euclidean distance lies within [(1−ε)(h−1)r(n), h·r(n)], for ε > 0, with probability approaching unity as n → ∞. This result shows that a node at hop distance h from the anchor is more likely to lie within this annulus, centred at the anchor location and of width roughly r(n), than close to a circle whose radius is exactly proportional to h. We show that if the radius r of the geometric graph is fixed, the convergence of the probability is exponentially fast. Similar results hold for a randomised lattice deployment. We provide simulation results that illustrate the theory, and serve to show how large n needs to be for the asymptotics to be useful.
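A minimal simulation sketch of this setup, assuming networkx for the geometric graph (the variable names and parameter values are ours, not the paper's): sample n uniform points on the unit square, build the geometric graph with radius c√(ln n / n), and compare each node's hop count from an anchor against its Euclidean distance.

```python
import math
import networkx as nx

# Random geometric graph on the unit square with radius of critical order.
n, c = 2000, 2.0
r = c * math.sqrt(math.log(n) / n)
G = nx.random_geometric_graph(n, r, seed=1)
pos = nx.get_node_attributes(G, "pos")

anchor = 0
hops = nx.single_source_shortest_path_length(G, anchor)
ax, ay = pos[anchor]

# For each hop count h, check that the Euclidean distance falls in
# [(1 - eps) * (h - 1) * r, h * r], as the theory predicts for large n.
eps = 0.1
for v, h in hops.items():
    if h == 0:
        continue
    d = math.dist(pos[v], (ax, ay))
    assert d <= h * r   # the upper bound holds deterministically (h hops of length <= r)
    if d < (1 - eps) * (h - 1) * r:
        print(f"node {v}: hop {h}, distance {d:.3f} falls below the annulus")
```

The upper bound h·r(n) is immediate; the interesting content of the result is how rarely the lower bound is violated as n grows.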
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology, with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes, and that some nodes are designated as anchors with known locations. First, we obtain high-probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some well-known algorithms in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
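A sketch of the basic idea behind hop-count-based localization of this kind, assuming each node estimates its distance to anchor i as d_i ≈ h_i · r and then solves a least-squares multilateration. This is an illustrative baseline under our own assumptions, not the authors' algorithm:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative hop-count multilateration; not the paper's algorithm.
anchors = np.array([[0.1, 0.1], [0.9, 0.2], [0.5, 0.9]])  # known anchor positions
hops = np.array([4, 6, 3])          # observed hop counts to each anchor
r = 0.12                            # geometric-graph radius (assumed known)
d_est = hops * r                    # crude hop-to-distance conversion

def residuals(x):
    # Difference between predicted and hop-estimated anchor distances.
    return np.linalg.norm(anchors - x, axis=1) - d_est

x0 = anchors.mean(axis=0)           # start from the anchor centroid
sol = least_squares(residuals, x0)
print("estimated position:", sol.x)
```

The paper's density-function approximation would replace the crude d_i ≈ h_i · r step with a proper likelihood for distance given hop count.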
Abstract:
The use of free vibration in elastic structures can lead to energy-efficient robot locomotion, since it significantly reduces the energy expenditure if properly designed and controlled. However, it is not well understood how to harness the dynamics of free vibration for robot locomotion, because of the complex dynamics originating in discrete events and energy dissipation during locomotion. From this perspective, the goals of this paper are to propose a design strategy for a hopping robot based on elastic curved beams and actuated rotating masses, and to identify the minimalistic model that can characterize the basic principle of the robot's locomotion. Since the robot mainly exhibits vertical hopping, three 1-D models are examined that contain different configurations of simple spring-damper-mass components. The real-world and simulation experiments show that one of the models best characterizes the robot hopping, through analysis of the basic kinematics and negative work in actuation. Based on this model, the self-stability of the hopping motion under disturbances is investigated, and design and control parameters are analyzed for energy-efficient hopping. In addition, further analyses show that this robot can achieve energy-efficient hopping under payload variation, and the source of energy dissipation in the robot hopping is investigated. © 1982-2012 IEEE.
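A minimal sketch of the kind of one-dimensional spring-damper-mass hopper such models describe (all parameter values are illustrative, not from the paper): a point mass on a spring-damper leg, with stance and flight phases switched on ground contact.

```python
# Minimal 1-D spring-damper-mass hopper; illustrative parameters only.
m, k, c, g = 1.0, 2000.0, 5.0, 9.81   # mass, stiffness, damping, gravity
l0 = 0.2                              # rest length of the leg spring
dt, T = 1e-4, 2.0                     # time step and total simulated time

y, v = 0.3, 0.0                       # initial height and velocity of the mass
heights = []
for _ in range(int(T / dt)):
    if y < l0:                        # stance: compressed spring-damper acts
        f_leg = k * (l0 - y) - c * v
        a = (max(f_leg, 0.0) - m * g) / m   # the leg can push but not pull
    else:                             # flight: purely ballistic
        a = -g
    v += a * dt                       # semi-implicit Euler integration
    y += v * dt
    heights.append(y)

print("peak hopping height (steady state):", max(heights[len(heights) // 2 :]))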
Abstract:
The use of free vibration in elastic structures can lead to energy-efficient robot locomotion, since it significantly reduces the energy expenditure if properly designed and controlled. However, it is not well understood how to harness the dynamics of free vibration for robot locomotion, because of the complex dynamics originating in discrete events and energy dissipation during locomotion. From this perspective, this paper explores three minimalistic models of free vibration that can characterize the basic principle of robot locomotion. Since the robot mainly exhibits vertical hopping, three one-dimensional models are examined that contain different configurations of simple spring-damper-mass components. The self-stability of these models is also investigated in simulation. The real-world and simulation experiments show that one of the models best characterizes the robot hopping, through analysis of the basic kinematics and negative work in actuation. Based on this model, the control parameters are analyzed for energy-efficient hopping. © 2013 IEEE.
Abstract:
Problems in the preservation of the quality of granular material products are complex and arise from a series of sources during transport and storage. Whether designing a new plant or, more likely, analysing problems that give rise to product quality degradation in existing operations, the process engineer requires practical measurement and simulation tools and technologies, both to identify the source of such problems and then to design them out. As part of a major research programme on quality in particulate manufacturing, computational models have been developed for segregation in silos, degradation in pneumatic conveyors, and the development of caking during storage, which use, where possible, micro-mechanical relationships to characterize the behaviour of granular materials. The objective of the work presented here is to demonstrate the use of these computational models of unit processes in the analysis of large-scale processes involving the handling of granular materials. This paper presents a set of simulations of a complete large-scale granular materials handling operation, involving the discharge of the material from a silo, its transport through a dilute-phase pneumatic conveyor, and its storage in a big bag under varying environmental temperature and humidity conditions. Conclusions are drawn on the capability of the computational models to represent key granular processes, including particle size segregation, degradation, and moisture migration caking.
Abstract:
Despite the simultaneous progress of traffic modelling on both the macroscopic and microscopic fronts, recent works [E. Bourrel, J.B. Lesort, Mixing micro and macro representation of traffic flow: a hybrid model based on the LWR theory, Transport. Res. Rec. 1852 (2003) 193–200; D. Helbing, M. Treiber, Critical discussion of "synchronized flow", Coop. Transport. Dyn. 1 (2002) 2.1–2.24; A. Hennecke, M. Treiber, D. Helbing, Macroscopic simulations of open systems and micro–macro link, in: D. Helbing, H.J. Herrmann, M. Schreckenberg, D.E. Wolf (Eds.), Traffic and Granular Flow '99, Springer, Berlin, 2000, pp. 383–388] have highlighted that one of the most promising ways to simulate traffic flow efficiently on large road networks is a clever combination of both traffic representations: hybrid modelling. Our focus in this paper is to propose two hybrid models in which the macroscopic (resp. mesoscopic) part is based on a class of second-order models [A. Aw, M. Rascle, Resurrection of second order models of traffic flow?, SIAM J. Appl. Math. 60 (2000) 916–938], whereas the microscopic part is a Follow-the-Leader-type model [D.C. Gazis, R. Herman, R.W. Rothery, Nonlinear follow-the-leader models of traffic flow, Oper. Res. 9 (1961) 545–567; R. Herman, I. Prigogine, Kinetic Theory of Vehicular Traffic, American Elsevier, New York, 1971]. For the first hybrid model, we define precisely the translation of boundary conditions at the interfaces, and for the second we explain the synchronization processes. Furthermore, through some numerical simulations we show that wave propagation is not disturbed and that mass is accurately conserved when passing from one traffic representation to the other.
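For reference, the cited second-order (Aw-Rascle) macroscopic model and the Gazis-Herman-Rothery follow-the-leader model take the following standard forms (a sketch from the cited literature; the paper's exact variants and notation may differ):

```latex
% Aw-Rascle second-order macroscopic model: density rho(x,t),
% velocity v(x,t), and an increasing "pressure" function p(rho).
\[
  \partial_t \rho + \partial_x (\rho v) = 0, \qquad
  \partial_t \bigl( v + p(\rho) \bigr) + v \, \partial_x \bigl( v + p(\rho) \bigr) = 0
\]
% Gazis-Herman-Rothery follow-the-leader model: vehicle n reacts, after a
% delay tau, to its leader n+1, with sensitivity kappa and exponents m, l.
\[
  \ddot{x}_n(t+\tau)
    = \kappa \, \bigl[ \dot{x}_n(t+\tau) \bigr]^{m} \,
      \frac{\dot{x}_{n+1}(t) - \dot{x}_n(t)}{\bigl( x_{n+1}(t) - x_n(t) \bigr)^{l}}
\]
```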
Abstract:
Polypropylene (PP), a semi-crystalline material, is typically solid-phase thermoformed at temperatures associated with crystalline melting, generally in the 150 °C to 160 °C range. In this very narrow thermoforming window the mechanical properties of the material decline rapidly with increasing temperature, and these large changes in properties make polypropylene one of the more difficult materials to process by thermoforming. Measurement of the deformation behaviour of a material under processing conditions is particularly important for accurate numerical modelling of thermoforming processes. This paper presents the findings of a study into the physical behaviour of industrial thermoforming grades of polypropylene. Practical tests were performed using custom-built materials testing machines and thermoforming equipment at Queen's University Belfast. Numerical simulations of these processes were constructed to replicate thermoforming conditions using industry-standard Finite Element Analysis software, namely ABAQUS, together with custom-built user material model subroutines. Several variant constitutive models were used to represent the behaviour of the polypropylene materials during processing, including a range of phenomenological, rheological and blended constitutive models. The paper discusses approaches to modelling industrial plug-assisted thermoforming operations using Finite Element Analysis techniques and the range of material models constructed and investigated, and directly compares practical results to numerical predictions. It culminates in a discussion of the learning points from using Finite Element Methods to simulate the plug-assisted thermoforming of polypropylene, which presents complex contact, thermal, friction and material modelling challenges. The paper makes recommendations as to the relative importance of these inputs in general terms with regard to correlation with experimentally gathered data, and as to the approaches to be taken to secure simulation predictions of improved accuracy.
Abstract:
We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads.
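A sketch of the flavour of such a linear prediction model, assuming mean response time is fit as an affine function of the number of consolidated VMs and the read fraction of the workload (the feature choice and all data below are ours, for illustration only):

```python
import numpy as np

# Illustrative linear model: predict mean I/O response time from the
# number of consolidated VMs and the read fraction of the workload.
# Synthetic calibration data; a real model would use measured values.
X = np.array([[1, 0.9], [2, 0.9], [3, 0.9], [1, 0.5], [2, 0.5], [3, 0.5]])
y = np.array([1.1, 1.9, 2.8, 1.4, 2.5, 3.6])    # mean response times (ms)

A = np.column_stack([np.ones(len(X)), X])       # intercept + features
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares fit

def predict(n_vms, read_frac):
    return coef @ [1.0, n_vms, read_frac]

print("predicted response time for 4 VMs, 70% reads:",
      round(predict(4, 0.7), 2), "ms")
```

The trace-driven queueing-network and response-time-distribution models described in the abstract would sit on top of this kind of parameterization rather than replace it.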
Abstract:
Well-planned natural ventilation strategies and systems in the built environment may provide healthy and comfortable indoor conditions, while contributing to a significant reduction in the energy consumed by buildings. Computational Fluid Dynamics (CFD) is particularly suited to modelling indoor conditions in naturally ventilated spaces, which are difficult to predict using other types of building simulation tools. Hence, accurate and reliable CFD models of naturally ventilated indoor spaces are necessary to support the effective design and operation of indoor environments in buildings. This paper presents a formal calibration methodology for the development of CFD models of naturally ventilated indoor environments. The methodology explains how to qualitatively and quantitatively verify and validate CFD models, including parametric analysis utilising the response-surface technique to support a robust calibration process. The proposed methodology is demonstrated on a naturally ventilated study zone in the library building at the National University of Ireland, Galway. The calibration process is supported by on-site measurements performed in a normally operating building. Measured outdoor weather data provided boundary conditions for the CFD model, while a network of wireless sensors supplied air speeds and air temperatures inside the room for the model calibration. The concepts and techniques developed here will enhance the process of achieving reliable CFD models that represent indoor spaces, and provide new and valuable information for estimating the effect of the boundary conditions on CFD model results in indoor environments. © 2012 Elsevier Ltd.
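A minimal sketch of the response-surface idea used in such a calibration, assuming a quadratic surface fit to one scalar model output over two boundary-condition parameters (the parameter names and all values are synthetic placeholders, not the paper's data):

```python
import numpy as np

# Fit a quadratic response surface z ~ f(p1, p2) from a few CFD runs,
# then use it to find the parameter pair matching a measured value.
p1 = np.array([0.5, 1.0, 1.5, 0.5, 1.0, 1.5, 0.5, 1.0, 1.5])  # inlet speed (m/s)
p2 = np.array([18., 18., 18., 21., 21., 21., 24., 24., 24.])  # inlet temp (C)
z = np.array([20.1, 20.8, 21.6, 21.0, 21.9, 22.9, 22.2, 23.3, 24.5])  # zone temp

A = np.column_stack([np.ones_like(p1), p1, p2, p1 * p2, p1**2, p2**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

def surface(a, b):
    return coef @ [1.0, a, b, a * b, a * a, b * b]

# Pick the grid point whose predicted output is closest to a measured 22.0 C.
grid = [(a, b) for a in np.linspace(0.5, 1.5, 21) for b in np.linspace(18, 24, 25)]
best = min(grid, key=lambda q: abs(surface(*q) - 22.0))
print("calibrated boundary-condition parameters:", best)
```

The surrogate surface makes the calibration loop cheap: the expensive CFD model is evaluated only at the design points, not at every candidate parameter set.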
Abstract:
The adsorption of gases on microporous carbons is still poorly understood, partly because the structure of these carbons is not well known. Here, a model of microporous carbons based on fullerene-like fragments is used as the basis for a theoretical study of Ar adsorption on carbon. First, a simulation box was constructed, containing a plausible arrangement of carbon fragments. Next, using a new Monte Carlo simulation algorithm, two types of carbon fragments were gradually placed into the initial structure to increase its microporosity. Thirty-six different microporous carbon structures were generated in this way. Using the method proposed recently by Bhattacharya and Gubbins (BG), the micropore size distributions of the obtained carbon models and the average micropore diameters were calculated. For ten chosen structures, Ar adsorption isotherms (87 K) were simulated via the hyper-parallel tempering Monte Carlo simulation method. The isotherms obtained in this way were described by widely applied methods of microporous carbon characterisation, i.e. Nguyen and Do, Horvath-Kawazoe, high-resolution αs plots, adsorption potential distributions and the Dubinin-Astakhov (DA) equation. From simulated isotherms described by the DA equation, the average micropore diameters were calculated using empirical relationships proposed by different authors, and they were compared with those from the BG method.
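For reference, the Dubinin-Astakhov equation mentioned above has the standard form (from the general adsorption literature, not quoted from this paper):

```latex
% Dubinin-Astakhov equation: W is the adsorbed volume, W_0 the limiting
% micropore volume, A = RT ln(p_0/p) the adsorption potential, beta the
% affinity coefficient, E_0 the characteristic energy, and n the
% heterogeneity exponent (n = 2 recovers the Dubinin-Radushkevich form).
\[
  W = W_0 \exp\!\left[ - \left( \frac{A}{\beta E_0} \right)^{n} \right],
  \qquad A = R T \ln\!\frac{p_0}{p}
\]
```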
Abstract:
In recent years a number of chemistry-climate models have been developed with an emphasis on the stratosphere. Such models cover a wide range of time scales of integration and vary considerably in complexity. The results of specific diagnostics are analysed here to examine the differences amongst individual models and observations, and to assess the consistency of model predictions, with a particular focus on polar ozone. For example, many models exhibit a significant cold bias at high latitudes, the "cold pole problem", particularly in the southern hemisphere during winter and spring. This is related to wave propagation from the troposphere, which can be improved by increasing model horizontal resolution and by the use of non-orographic gravity wave drag. As a result of the widely differing modelled polar temperatures, different amounts of polar stratospheric clouds are simulated, which in turn result in varying ozone values in the models. The results are also compared to determine the possible future behaviour of ozone, with an emphasis on the polar regions and mid-latitudes. All models predict eventual ozone recovery, but give a range of results concerning its timing and extent. Differences in the simulation of gravity waves and planetary waves, as well as model resolution, are likely major sources of uncertainty for this issue. In the Antarctic, the ozone hole has probably almost reached its maximum depth, although the vertical and horizontal extent of depletion may increase slightly further over the next few years. According to the model results, Antarctic ozone recovery could begin in any year within the range 2001 to 2008. The limited number of models which have been integrated sufficiently far indicate that full recovery of ozone to 1980 levels may not occur in the Antarctic until about the year 2050. For the Arctic, most models indicate that small ozone losses may continue for a few more years and that recovery could begin in any year within the range 2004 to 2019. The start of ozone recovery in the Arctic is therefore expected to appear later than in the Antarctic.
Abstract:
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.