52 results for traffic simulation models


Relevance: 30.00%

Abstract:

Purpose - The purpose of this paper is to apply the lattice Boltzmann method (LBM) with a multiple relaxation time (MRT) model to investigate lid-driven flow in a three-dimensional (3D) rectangular cavity, and to compare the results with flow in an equivalent two-dimensional (2D) cavity. Design/methodology/approach - The second-order MRT model is implemented in a 3D LBM code. The flow structure in cavities of different aspect ratios (0.25-4) and Reynolds numbers (0.01-1000) is investigated. The LBM simulation results are compared with those from numerical solution of the Navier-Stokes (NS) equations and with available experimental data. Findings - The 3D simulations demonstrate that 2D models may predict the flow structure reasonably well at low Reynolds numbers, but significant differences from experimental data appear at high Reynolds numbers. This discrepancy between 2D and 3D results is attributed to the effect of boundary layers near the side-walls in the transverse direction (in 3D), which generally weakens the vorticity in the core region. Secondly, owing to the vortex stretching effect present in 3D flow, the vorticity in the transverse plane intensifies whereas that in the lateral plane decays with increasing Reynolds number. However, on the symmetry plane, the variation of the flow structure with cavity aspect ratio is found to be qualitatively consistent with the results of 2D simulations. Secondary flow vortices whose axis is in the direction of the lid motion are observed; these are weak at low Reynolds numbers but become quite strong at high Reynolds numbers. Originality/value - The findings will be useful in the study of a variety of enclosed fluid flows.
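
The paper's 3D MRT solver is too large to sketch here, but the collide-and-stream structure it builds on can be illustrated with a minimal single-relaxation-time (BGK) D2Q9 step on a periodic lattice; an MRT model would replace the scalar relaxation time `tau` with a matrix of relaxation rates. All names and parameters below are illustrative, not the paper's implementation.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each of the 9 directions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_step(f, tau):
    """One collide-and-stream step with single-relaxation-time (BGK) collision."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau      # collision
    for i, (cx, cy) in enumerate(c):                # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# demo: a uniform fluid at rest is a fixed point of collide-and-stream
rho0 = np.ones((16, 16))
f = equilibrium(rho0, np.zeros((16, 16)), np.zeros((16, 16)))
f = bgk_step(f, tau=0.8)
```

A lid-driven cavity would additionally impose a moving-wall boundary condition on the top row instead of the periodic streaming used in this sketch.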

Relevance: 30.00%

Abstract:

The network scenario is that of an infrastructure IEEE 802.11 WLAN with a single AP with which several stations (STAs) are associated. The AP has a finite-size buffer for storing packets. In this scenario, we consider TCP-controlled upload and download file transfers between the STAs and a server on the wireline LAN (e.g., 100 Mbps Ethernet) to which the AP is connected. In such a situation, it is well known that, because of packet losses due to the finite buffer at the AP, upload file transfers obtain larger throughputs than download transfers. We provide an analytical model for estimating the upload and download throughputs as a function of the buffer size at the AP. We provide models for the undelayed and delayed ACK cases for a TCP that performs loss recovery only by timeout, and also for TCP Reno. The models are validated by comparison with NS2 simulations.

Relevance: 30.00%

Abstract:

We investigate the ability of a global atmospheric general circulation model (AGCM) to reproduce observed 20-year return values of the annual maximum daily precipitation totals over the continental United States as a function of horizontal resolution. We find that at the high resolutions enabled by contemporary supercomputers, the AGCM can produce values of comparable magnitude to high-quality observations. However, at the resolutions typical of the coupled general circulation models used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, the precipitation return values are severely underestimated.
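
A 20-year return value of the kind evaluated above is typically obtained by fitting an extreme-value distribution to a sample of annual maxima. As a minimal sketch (a method-of-moments Gumbel fit, which is one common simple choice, not necessarily the estimator used in the paper, and with hypothetical data):

```python
import math

def gumbel_return_value(annual_maxima, return_period):
    """Estimate the T-year return value by fitting a Gumbel (EV-I) distribution
    to a sample of annual maxima using the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean)**2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi          # scale parameter
    mu = mean - 0.5772 * beta                    # location (Euler-Mascheroni const.)
    p = 1 - 1 / return_period                    # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# demo: hypothetical station record of annual maximum daily precipitation (mm)
annual_max_mm = [62, 78, 55, 91, 70, 84, 66, 73, 59, 88, 95, 61]
rv20 = gumbel_return_value(annual_max_mm, 20)
rv2 = gumbel_return_value(annual_max_mm, 2)
```

The 20-year value exceeds the 2-year value by construction, since the return level grows with `-log(-log(1 - 1/T))`.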

Relevance: 30.00%

Abstract:

Well injection replenishes depleting water levels in a well field. Observation-well water levels some distance away from the injection well are the indicators of the success of a well injection program. Simulation of the response of an observation well located a few tens of meters from the injection well is likely to be affected by a nonhomogeneous medium, an inclined initial water table, and aquifer clogging. Existing codes, such as the U.S. Geological Survey groundwater flow software MODFLOW, are capable of handling the first two conditions, whereas time-dependent clogging effects are yet to be introduced into groundwater flow models. Elsewhere, aquifer clogging is extensively researched in the theory of filtration; the scope for its application in a well field is a potential research problem. In the present paper, a coupling of one such filtration theory to MODFLOW is introduced. Simulation of clogging effects during the “Hansol” well recharge in parts of western India is found to be encouraging.
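
The kind of coupling described above can be sketched as a time loop in which a filtration-theory deposition law degrades hydraulic conductivity step by step. This is a toy illustration with hypothetical coefficients, not the paper's actual filtration law; in a real coupling, the updated conductivity would be written back into MODFLOW's conductivity arrays each stress period.

```python
import math

def simulate_clogging(K0, lam, inflow_conc, dt, steps):
    """Sketch of time-dependent conductivity reduction near an injection well:
    suspended particles deposit in proportion to concentration and conductivity
    (a stand-in filtration law), and K decays exponentially with the deposit."""
    K = K0
    deposit = 0.0
    history = []
    for _ in range(steps):
        deposit += lam * inflow_conc * K * dt   # deposition scales with flux
        K = K0 * math.exp(-deposit)             # clogging reduces conductivity
        history.append(K)
    return history

# demo: conductivity history over 10 injection time steps (illustrative units)
K_history = simulate_clogging(K0=1.0, lam=0.1, inflow_conc=1.0, dt=1.0, steps=10)
```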

Relevance: 30.00%

Abstract:

We develop several hardware and software simulation blocks for the TinyOS-2 (TOSSIM-T2) simulator. The simulated hardware platform chosen is the popular MICA2 mote. While the hardware simulation elements comprise the radio and external flash memory, the software blocks include an environment noise model, a packet delivery model, and an energy estimator block for the complete system. The hardware radio block uses the software environment noise model to sample the noise floor. The packet delivery model is built by establishing the SNR-PRR curve for the MICA2 system. The energy estimator block models energy consumption by the Micro Controller Unit (MCU), radio, LEDs, and external flash memory. Using the manufacturer's data sheets, we provide an estimate of the energy consumed by the hardware during transmission and reception, and also track several of the MCU's states with the associated energy consumption. To study the effectiveness of this work, we take as a case study the paper presented in [1]. We obtain three sets of results for energy consumption: through mathematical analysis, through simulation using the blocks built into PowerTossim-T2, and finally through laboratory measurements. Since there is a significant match between these result sets, we propose our blocks for the T2 community to effectively test their applications' energy requirements and node lifetimes.
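
A datasheet-driven energy estimator of the kind described reduces to summing V x I x t over the time spent in each hardware state. The current values below are illustrative placeholders, not the MICA2 datasheet figures used in the paper.

```python
# Illustrative (hypothetical) current draws in mA for a MICA2-class mote;
# the paper takes the real values from the manufacturer's data sheets.
CURRENT_MA = {"mcu_active": 8.0, "mcu_idle": 3.2, "radio_tx": 27.0,
              "radio_rx": 10.0, "led": 2.2, "flash_write": 15.0}
VOLTAGE = 3.0  # supply voltage in volts

def energy_mj(state_durations_s):
    """Total energy in millijoules, given seconds spent in each state:
    E = sum over states of V * I_state * t_state."""
    return sum(VOLTAGE * CURRENT_MA[s] * t for s, t in state_durations_s.items())

# demo: one second of radio transmission, and a mixed MCU/LED interval
tx_energy = energy_mj({"radio_tx": 1.0})
mixed = energy_mj({"mcu_active": 2.0, "led": 1.0})
```

Node lifetime then follows by dividing the battery's energy budget by the average power implied by the per-state time shares.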

Relevance: 30.00%

Abstract:

The hybrid approach introduced by the authors for at-site modeling of annual and periodic streamflows in earlier works is extended to simulate multi-site, multi-season streamflows. It bears significance in integrated river basin planning studies. The hybrid model involves: (i) partial pre-whitening of standardized multi-season streamflows at each site using a parsimonious linear periodic model; (ii) contemporaneous resampling of the resulting residuals with an appropriate block size, using the moving block bootstrap (non-parametric, NP) technique; and (iii) post-blackening the bootstrapped innovation series at each site, by adding the corresponding parametric model component for the site, to obtain generated streamflows at each of the sites. The model gains significantly by effectively utilizing the merits of both parametric and NP models. It is able to reproduce various statistics, including the dependence relationships at both spatial and temporal levels, without using any normalizing transformations and/or adjustment procedures. The potential of the hybrid model in reproducing a wide variety of statistics, including run characteristics, is demonstrated through an application to multi-site streamflow generation in the Upper Cauvery river basin, Southern India.
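
Step (ii) above, the moving block bootstrap, can be sketched as follows: overlapping blocks are drawn at random from the residual series and concatenated until the required length is reached. This is a generic single-series sketch, not the authors' code.

```python
import random

def moving_block_bootstrap(residuals, block_size, length, rng=None):
    """Resample a series by concatenating overlapping blocks drawn at random
    from the residual series (moving block bootstrap)."""
    rng = rng or random.Random()
    n = len(residuals)
    out = []
    while len(out) < length:
        start = rng.randrange(n - block_size + 1)  # block start, uniform
        out.extend(residuals[start:start + block_size])
    return out[:length]

# demo: resample a short residual series with blocks of length 3
rng = random.Random(42)
residuals = list(range(10))
sample = moving_block_bootstrap(residuals, block_size=3, length=8, rng=rng)
```

For the multi-site (contemporaneous) case described in the abstract, the same random block start indices would be applied to every site's residual series, which is what preserves the spatial dependence.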

Relevance: 30.00%

Abstract:

We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of the decoupling approach of Bianchi ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that, even when analysing WLANs with unsaturated nodes, the following state-dependent service model works well: when a certain set of nodes is nonempty, their channel attempt behaviour is obtained from the corresponding fixed-point analysis of the saturated system. We present our experiences in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced DCF channel access (EDCA) mechanism. We have found that we can model TCP-controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting with this simple approximation.

Relevance: 30.00%

Abstract:

Processor architects have the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. In order to assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architecture parameters, using simulation-based experiments. We obtain good approximate models through an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering on all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
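
The AIC-driven model extraction can be sketched as greedy forward selection over candidate predictors, each step keeping the predictor that lowers AIC most. This is a generic sketch with synthetic data, not the paper's 26-parameter CPI study, and it omits the D-optimal design step that chooses new simulations.

```python
import numpy as np

def aic_linear(y, X):
    """AIC of an ordinary-least-squares fit (Gaussian likelihood, up to a constant)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta)**2))
    return n * np.log(rss / n) + 2 * (k + 1)

def forward_select(y, candidates):
    """Greedy forward selection: repeatedly add the predictor that lowers AIC most."""
    n = len(y)
    chosen = [np.ones((n, 1))]           # start with intercept only
    remaining = list(range(len(candidates)))
    best_aic = aic_linear(y, np.hstack(chosen))
    improved = True
    while improved and remaining:
        improved = False
        scores = [(aic_linear(y, np.hstack(chosen + [candidates[j]])), j)
                  for j in remaining]
        aic, j = min(scores)
        if aic < best_aic:
            best_aic, improved = aic, True
            chosen.append(candidates[j]); remaining.remove(j)
    return best_aic, len(chosen) - 1     # final AIC, number of predictors kept

# demo: one informative predictor (x1) and one pure-noise predictor (x2)
rng = np.random.default_rng(0)
x1 = rng.normal(size=(200, 1)); x2 = rng.normal(size=(200, 1))
y = 3 * x1[:, 0] + 0.1 * rng.standard_normal(200)
best_aic, n_kept = forward_select(y, [x1, x2])
```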

Relevance: 30.00%

Abstract:

The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements, as well as in the postulated model for the structural behaviour, are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations.
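
The discrete Kalman filter used in the linear-system case combines a prediction from the dynamic model with each new measurement. A minimal scalar predict-update cycle (a textbook sketch, not the structural model of the paper):

```python
def kalman_update(x, P, z, A, Q, H, R):
    """One predict-update cycle of a scalar discrete Kalman filter.
    x, P : prior state estimate and its variance
    z    : new measurement
    A, Q : state-transition coefficient and process-noise variance
    H, R : observation coefficient and measurement-noise variance"""
    # Predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

# demo: equal prior and measurement variances split the difference
x_new, P_new = kalman_update(x=0.0, P=1.0, z=1.0, A=1.0, Q=0.0, H=1.0, R=1.0)
```

In the reliability setting, the filtered mean and variance would then feed the level-crossing statistics that define the updated failure probability.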

Relevance: 30.00%

Abstract:

The CCEM (Contact Criteria and Energy Minimisation) method has been developed and applied to study protein-carbohydrate interactions. The method uses available X-ray data, even on the native protein at low resolution (above 2.4 Å), to generate realistic models of a variety of proteins with various ligands. The two examples discussed in this paper are the arabinose-binding protein (ABP) and pea lectin. The X-ray crystal structure data reported for the ABP-β-l-arabinose complex at 2.8, 2.4 and 1.7 Å resolution differ drastically in the nature of the interactions they predict between the protein and ligand. It is shown that, using the data at 2.4 Å resolution, the CCEM method generates complexes that are as good as those from the higher-resolution (1.7 Å) data. The CCEM method predicts some of the important hydrogen bonds between the ligand and the protein which are missing in the interpretation of the X-ray data at 2.4 Å resolution. The theoretically predicted hydrogen bonds are in good agreement with those reported at 1.7 Å resolution. Pea lectin has been solved only in the native form, at 3 Å resolution. Application of the CCEM method also enables us to generate complexes of pea lectin with methyl-α-d-glucopyranoside and methyl-2,3-dimethyl-α-d-glucopyranoside which explain well the available experimental data in solution.
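
The contact-criteria-then-energy-minimisation idea can be caricatured in one dimension: scan a ligand torsion angle on a grid, discard geometries that fail a steric contact check, and keep the lowest-energy survivor. The energy and clash functions below are placeholders, not the CCEM force field.

```python
import math

def scan_torsion(energy, n_grid=72, clash=lambda phi: False):
    """Toy contact-criteria + energy-minimisation scan over one torsion angle:
    grid-search phi in [0, 2*pi), reject geometries failing the contact (clash)
    criterion, and return the lowest-energy surviving (phi, E) pair."""
    best = None
    for i in range(n_grid):
        phi = 2 * math.pi * i / n_grid
        if clash(phi):
            continue                      # contact criterion: skip clashes
        e = energy(phi)
        if best is None or e < best[1]:
            best = (phi, e)
    return best

# demo: a cosine "energy" has its minimum at phi = pi
phi_best, e_best = scan_torsion(math.cos)
```

A real CCEM-style calculation would iterate such scans over several torsions and follow them with a gradient-based minimisation of the full complex.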

Relevance: 30.00%

Abstract:

With the objective of better understanding the significance of the New Car Assessment Program (NCAP) tests conducted by the National Highway Traffic Safety Administration (NHTSA), head-on collisions between two identical cars of different sizes, and between cars and a pickup truck, are studied in the present paper using LS-DYNA models. Available finite element models of a compact car (Dodge Neon), a midsize car (Dodge Intrepid), and a pickup truck (Chevrolet C1500) are first improved and validated by comparing the analysis-based vehicle deceleration pulses against the corresponding NCAP crash test histories reported by NHTSA. In confirmation of prevalent perception, simulation-based results indicate that an NCAP test against a rigid barrier is a good representation of a collision between two similar cars approaching each other at a speed of 56.3 km/h (35 mph), both in terms of peak deceleration and intrusions. However, analyses carried out for collisions between two incompatible vehicles, such as an Intrepid or Neon against a C1500, point to the inability of the NCAP tests to represent the substantially higher intrusions in the front upper regions experienced by the cars, although peak decelerations in the cars are comparable to those observed in NCAP tests. In an attempt to improve the capability of a frontal NCAP test to better represent real-world crashes between incompatible vehicles, i.e., ones with contrasting ride height and lower-body stiffness, two modified rigid barriers are studied. One of these barriers, which is of stepped geometry with a curved front face, leads to significantly improved correlation of intrusions in the upper regions of the cars with respect to those yielded in the simulation of collisions between incompatible vehicles, while yielding vehicle peak decelerations similar to those obtained in NCAP tests.

Relevance: 30.00%

Abstract:

An attempt has been made here to study the sensitivity of the mean and turbulence structure of the monsoon trough boundary layer to the choice of the constants in the dissipation equation for two stations, Delhi and Calcutta, using a one-dimensional atmospheric boundary layer model with e-epsilon turbulence closure. An analytical discussion of the problems associated with the constants of the dissipation equation is presented. It is shown that the choice of the constants in the dissipation equation is quite crucial and that the turbulence structure is very sensitive to these constants. The modification of the dissipation equation adopted by earlier studies, that is, approximating the TKE generation (due to shear and buoyancy production) in the epsilon-equation by max(shear production, shear + buoyancy production), can be avoided by a suitable choice of the constants suggested here. The observed turbulence structure is better simulated with these constants. The turbulence structure simulated with the constants recommended by Aupoix et al. (1989) (which are interactive in time) for the monsoon region is shown to be qualitatively similar to the simulation obtained with the constants suggested here, implying that no universal constants exist to regulate the dissipation rate. Simulations of the mean structure show little sensitivity to the choice between e-l and e-epsilon closures. However, the turbulence structure simulated with the e-epsilon closure is far better than that from the e-l model simulations. The model simulations of temperature profiles compare quite well with observations whenever the boundary layer is well mixed (neutral) or unstable. However, the models are not able to simulate the nocturnal (stable) boundary layer temperature profiles. Moisture profiles are simulated reasonably well. With one-dimensional models, capturing the observed wind variations remains difficult.
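
The sensitivity to the dissipation-equation constants can be illustrated with the drastically simplified (zero-dimensional) TKE/dissipation pair often written as dk/dt = P - eps and d(eps)/dt = (eps/k)(c1 P - c2 eps). This sketch uses the standard constants c1 = 1.44, c2 = 1.92 only as illustrative defaults; it is not the paper's full one-dimensional boundary layer model.

```python
def ke_decay_step(k, eps, P, dt, c1=1.44, c2=1.92):
    """One explicit Euler step of the simplified TKE (k) / dissipation (eps) pair:
    dk/dt = P - eps,  d(eps)/dt = (eps/k) * (c1*P - c2*eps)."""
    return k + dt * (P - eps), eps + dt * (eps / k) * (c1 * P - c2 * eps)

# demo: decaying turbulence (P = 0) integrated with two choices of c2;
# the decay rate of k depends directly on the dissipation constant c2
k1, e1 = 1.0, 0.1
k2, e2 = 1.0, 0.1
for _ in range(1000):
    k1, e1 = ke_decay_step(k1, e1, P=0.0, dt=0.01)            # c2 = 1.92
    k2, e2 = ke_decay_step(k2, e2, P=0.0, dt=0.01, c2=1.5)    # alternative c2
```

Larger c2 makes eps decay faster and hence slows the decay of k, which is the kind of constant-dependence the abstract reports for the full model.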

Relevance: 30.00%

Abstract:

We show by numerical simulations that discretized versions of commonly studied continuum nonlinear growth equations (such as the Kardar-Parisi-Zhang equation and the Lai-Das Sarma-Villain equation) and related atomistic models of epitaxial growth have a generic instability in which isolated pillars (or grooves) on an otherwise flat interface grow in time when their height (or depth) exceeds a critical value. Depending on the details of the model, the instability found in the discretized version may or may not be present in the truly continuum growth equation, indicating that the behavior of discretized nonlinear growth equations may be very different from that of their continuum counterparts. This instability can be controlled either by the introduction of higher-order nonlinear terms with appropriate coefficients or by restricting the growth of pillars (or grooves) by other means. A number of such "controlled instability" models are studied by simulation. For appropriate choices of the parameters used for controlling the instability, these models exhibit intermittent behavior, characterized by multiexponent scaling of height fluctuations, over the time interval during which the instability is active. The behavior found in this regime is very similar to the "turbulent" behavior observed in recent simulations of several one- and two-dimensional atomistic models of epitaxial growth.
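
The discretizations studied in such work are explicit lattice updates of the continuum equation. A minimal sketch of the deterministic part of a discretized 1D KPZ equation is below (noise omitted, parameters illustrative); the pillar instability in the paper would be probed by initializing an isolated pillar of varying height and watching whether it decays or grows.

```python
import numpy as np

def kpz_step(h, nu=1.0, lam=1.0, dt=0.01, dx=1.0):
    """One explicit Euler step of the spatially discretized, noise-free KPZ
    equation dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)^2 on a periodic lattice,
    using central differences for both derivatives."""
    lap = (np.roll(h, 1) - 2*h + np.roll(h, -1)) / dx**2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2*dx)
    return h + dt * (nu * lap + 0.5 * lam * grad**2)

# demo 1: a flat interface is an exact fixed point of the update
flat = kpz_step(np.zeros(64))

# demo 2: a small (sub-critical) pillar is smoothed away by the diffusion term
h = np.zeros(64)
h[32] = 0.1
for _ in range(100):
    h = kpz_step(h)
```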

Relevance: 30.00%

Abstract:

A two-time-scale stochastic approximation algorithm is proposed for simulation-based parametric optimization of hidden Markov models, as an alternative to traditional "infinitesimal perturbation analysis" approaches. Its convergence is analyzed, and a queueing example is presented.
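
The two-time-scale idea can be sketched in its simplest form: a fast iterate with a larger step size tracks a simulation-based quantity, while a slow iterate with a smaller step size updates the parameter as if the fast iterate had already converged. This toy version (fast iterate tracking a mean, slow iterate following it; step-size exponents illustrative) is not the paper's HMM algorithm.

```python
import random

def two_timescale(samples, theta0=0.0):
    """Two-time-scale stochastic approximation sketch: the fast iterate w tracks
    a running estimate of the mean of noisy samples (step ~ n**-0.6), while the
    slow iterate theta drifts toward w (step ~ 1/n)."""
    theta, w = theta0, 0.0
    for n, x in enumerate(samples, start=1):
        w += (x - w) / n**0.6           # fast time scale
        theta += (w - theta) / n        # slow time scale
    return theta, w

# demo: noisy samples with true mean 2.0; both iterates settle near 2.0
random.seed(1)
theta, w = two_timescale(random.gauss(2.0, 0.5) for _ in range(20000))
```

The separation of step sizes (n**-0.6 versus 1/n) is what lets the convergence analysis treat the fast iterate as quasi-static from the slow iterate's point of view.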

Relevance: 30.00%

Abstract:

A systematic assessment of the submodels of the conditional moment closure (CMC) formalism for the autoignition problem is carried out using direct numerical simulation (DNS) data. An initially non-premixed n-heptane/air system, subjected to three-dimensional, homogeneous, isotropic, decaying turbulence, is considered. Two kinetic schemes, (1) a one-step and (2) a reduced four-step reaction mechanism, are considered for the chemistry. An alternative formulation is developed for closure of the mean chemical source term, based on the condition that the instantaneous fluctuation of the excess temperature is small. With this model, it is shown that the CMC equations describe the autoignition process all the way up to near the equilibrium limit. The effect of second-order terms (namely, the conditional variance of the temperature excess, sigma^2, and the conditional correlations of species, q_ij) on the modeling is examined. Comparison with DNS data shows that sigma^2 has little effect on the predicted conditional mean temperature evolution if the average conditional scalar dissipation rate is properly modeled. Using DNS data, a correction factor is introduced in the modeling of the nonlinear terms to include the effect of species fluctuations. Computations including such a correction factor show that the species conditional correlations q_ij have little effect on model predictions with the one-step reaction, but those q_ij involving intermediate species are found to be crucial when the four-step reduced kinetics is considered. The "most reactive mixture fraction" is found to vary with time when the four-step kinetics is considered. First-order CMC results are found to be qualitatively wrong if the conditional mean scalar dissipation rate is not modeled properly. The autoignition delay time predicted by the CMC model compares excellently with DNS results and shows a trend similar to experimental data over a range of initial temperatures.