907 results for Network simulation
Abstract:
Nowadays, in-lab train control simulation tools play a crucial role in reducing extensive and expensive on-site railway testing activities. In this paper, we present our contribution in this arena by detailing the internals of our European Railway Train Management System (ERTMS) in-lab demonstrator. The demonstrator is built on a general-purpose simulation framework, Riverbed Modeler (formerly OPNET Modeler). Our framework models both ERTMS subsystems: the Automatic Train Protection application layer, based on movement-authority message exchange, and the telecommunication subsystem, based on GSM-R communication technology. We provide detailed information on our modelling strategy and validate the simulation framework against real trace data. Finally, given the industry's ongoing migration from the obsolescent GSM-R legacy to IP-based heterogeneous technologies, our simulation framework offers railway operators a unique tool. As an example, we assess the relevant performance indicators for a specific railway network using a candidate replacement technology, LTE, versus the current legacy technology. To the best of our knowledge, no similar initiative is able to measure the impact of the telecommunication subsystem on railway network availability.
Abstract:
Iteration is unavoidable in the design process and should be incorporated when planning and managing projects in order to minimize surprises and reduce schedule distortions. However, planning and managing iteration is challenging because the relationships between its causes and effects are complex. Most approaches which use mathematical models to analyze the impact of iteration on the design process focus on a relatively small number of its causes and effects. Therefore, insights derived from these analytical models may not be robust under a broader consideration of potential influencing factors. In this article, we synthesize an explanatory framework which describes the network of causes and effects of iteration identified from the literature, and introduce an analytic approach which combines a task network modeling approach with System Dynamics simulation. Our approach models the network of causes and effects of iteration alongside the process architecture which is required to analyze the impact of iteration on design process performance. We show how this allows managers to assess the impact of changes to process architecture and to management levers which influence iterative behavior, accounting for the fact that these changes can occur simultaneously and can accumulate in non-linear ways. We also discuss how the insights resulting from this analysis can be visualized for easier consumption by project participants not familiar with simulation methods. Copyright © 2010 by ASME.
Abstract:
In this paper, a new thermal model based on the Fourier-series solution of the heat conduction equation is introduced in detail. 1-D and 2-D Fourier-series thermal models have been programmed in MATLAB/Simulink. Compared with the traditional finite-difference thermal model and the equivalent RC thermal network, the new thermal model provides high simulation speed with high accuracy, which proves more favorable for dynamic thermal characterization of power semiconductor switches. Complete electrothermal simulation models of the insulated-gate bipolar transistor (IGBT) and power diodes under inductive-load switching conditions have been successfully implemented in MATLAB/Simulink. Experimental results on IGBTs and power diodes in clamped inductive-load switching tests verify the new electrothermal simulation model. The advantage of the Fourier-series thermal model over the widely used equivalent RC thermal network in dynamic thermal characterization is also validated by the measured junction temperature. © 2010 IEEE.
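The abstract does not reproduce the model's equations, but the flavor of a 1-D Fourier-series thermal solution can be sketched with the textbook slab problem: a constant heat flux applied at one face, a fixed heatsink temperature at the other, and the transient thermal impedance expressed as a truncated eigenfunction series. All parameter values below are illustrative assumptions, not the paper's device data.

```python
import math

def zth_1d_slab(t, L, k, A, alpha, n_terms=50):
    """Transient thermal impedance (K/W) at the heated face of a 1-D slab.

    Fourier-series solution of the heat conduction equation for a slab of
    thickness L (m), conductivity k (W/m/K), area A (m^2), and diffusivity
    alpha (m^2/s), with a constant heat flux at x = 0 and a fixed heatsink
    temperature at x = L.  Truncated after n_terms odd harmonics.
    """
    r_th = L / (k * A)  # steady-state thermal resistance
    s = 0.0
    for m in range(n_terms):
        n = 2 * m + 1  # only odd harmonics contribute
        s += math.exp(-((n * math.pi / (2.0 * L)) ** 2) * alpha * t) / n ** 2
    return r_th * (1.0 - (8.0 / math.pi ** 2) * s)

# Silicon-like values (assumed for illustration only)
L, k, A, alpha = 300e-6, 148.0, 1e-4, 8.8e-5
print(zth_1d_slab(0.0, L, k, A, alpha))   # near zero: no temperature rise yet
print(zth_1d_slab(1.0, L, k, A, alpha))   # approaches R_th = L / (k * A)
```

Unlike a finite-difference march, each evaluation is a closed-form sum, which is where the speed advantage over discretized thermal networks comes from.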
Abstract:
This paper describes the ground-target detection, classification, and sensor-fusion problems in a distributed fiber seismic sensor network. Compared with the conventional piezoelectric seismic sensors used in unattended ground sensors (UGS), fiber-optic sensors offer high sensitivity and resistance to electromagnetic disturbance. We have developed a fiber seismic sensor network for target detection and classification. However, ground-target recognition based on seismic sensing is very challenging because of the non-stationary character of seismic signals and the complexity of real-life application environments. To address these difficulties, we study robust feature-extraction and classification algorithms adapted to the fiber sensor network. A unified multi-feature (UMF) method is used, and an adaptive threshold detection algorithm is proposed to minimize the false-alarm rate. Three kinds of targets are considered: personnel, wheeled vehicles, and tracked vehicles. Classification simulation results show that the SVM classifier outperforms the GMM and BPNN classifiers. A sensor-fusion method based on Dempster-Shafer (D-S) evidence theory is discussed to fully exploit the information from the fiber sensor array and improve the overall performance of the system. A field experiment was organized to test the performance of the fiber sensor network and to gather real target signals for classification testing.
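The abstract does not specify the adaptive threshold rule, but a common scheme of this kind tracks the running mean and deviation of the background signal energy and flags samples that exceed the mean by a multiple of the deviation, so the threshold follows a drifting noise floor instead of firing false alarms on it. The sketch below is an assumption in that spirit, not the paper's algorithm; window size and margin `k` are illustrative.

```python
import statistics

def adaptive_threshold_detect(signal, win=64, k=4.0):
    """Indices where short-term energy exceeds an adaptive threshold.

    The threshold is mu + k * sigma of the energy over the previous `win`
    samples, so it adapts to slow changes in the background noise floor.
    Illustrative sketch only; the paper's exact detection rule is not
    given in the abstract.
    """
    energies = [s * s for s in signal]
    detections = []
    for i in range(win, len(energies)):
        background = energies[i - win:i]
        mu = statistics.fmean(background)
        sigma = statistics.pstdev(background)
        if energies[i] > mu + k * sigma:
            detections.append(i)
    return detections
```

A synthetic seismic trace with one strong transient will produce a single detection at the transient, while the periodic background stays below threshold.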
Abstract:
In this paper, we revisit the public goods game (PGG) on a heterogeneous graph. By introducing a new effective topology parameter, the 'degree grads' phi, we clearly classify the agents into three kinds, namely C0, C1, and D. The mechanism by which the heterogeneous topology promotes cooperation is discussed in detail from the C0-C1-D perspective, which reflects the fact that the unreasoning imitation behaviour of C1 agents, who are 'cheated' by the well-paid C0 agents occupying special positions, stabilizes the formation of the cooperation community. The analytical and simulation results for certain parameters are found to coincide well. The C0-C1-D picture captures actual behaviours in real society and is thus potentially of interest.
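For readers unfamiliar with the networked PGG, the standard payoff accounting (which the abstract assumes) works as follows: each agent participates in one game centred on itself and one centred on each neighbour; in every group, cooperators pay a cost c into the pot, the pot is multiplied by r, and the proceeds are split equally among the group. A minimal sketch of that accounting, with an illustrative adjacency-list graph:

```python
def pgg_payoffs(adj, strategies, r=3.0, c=1.0):
    """Accumulated PGG payoff of every agent on a graph.

    adj        : adjacency list, adj[i] = list of neighbours of agent i
    strategies : 1 for cooperator, 0 for defector
    Each agent plays in the group centred on itself and in the groups
    centred on its neighbours.  Standard networked-PGG accounting,
    sketched for illustration; the paper's phi classification of agents
    is not reproduced here.
    """
    n = len(adj)
    payoff = [0.0] * n
    for centre in range(n):
        group = [centre] + adj[centre]
        pot = r * c * sum(strategies[g] for g in group)  # multiplied pot
        share = pot / len(group)                         # equal split
        for g in group:
            payoff[g] += share - (c if strategies[g] else 0.0)
    return payoff

# Three agents on a line: cooperator - defector - cooperator
print(pgg_payoffs([[1], [0, 2], [1]], [1, 0, 1]))
```

On this tiny example the central defector earns more than either cooperator, the free-riding pressure that the heterogeneous topology is shown to counteract.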
Abstract:
Physical gelation in a concentrated Pluronic F127/D2O solution has been studied by a combination of small-angle neutron scattering (SANS) and Monte Carlo simulation. A 15% F127/D2O solution exhibits a sol-gel transition at low temperature and a gel-sol transition at higher temperature, as evidenced by both the SANS and Monte Carlo simulation studies. Our SANS and simulation results also suggest that the sol-gel transition is dominated by the formation of a percolated polymer network, while the gel-sol transition is determined by the loss of bound solvent. Furthermore, different diffusion behaviors of the various bound solvents and the free solvent are observed. We expect that this approach can be extended to study the phase behavior of other systems with similar sol-gel phase diagrams.
Abstract:
The grafting of maleic anhydride (MAH) onto isotactic polypropylene (iPP) initiated by dicumyl peroxide (DCP) at 190 °C was studied by means of the Monte Carlo method. The ceiling-temperature theory, i.e., that homopolymerization of MAH cannot occur at higher temperatures, was used in this study. The simulation results show that most MAH monomers were grafted onto the radical chain ends arising from beta scission at lower MAH concentrations, whereas the amount of MAH attached to tertiary carbons was much larger than that grafted onto the radical chain ends at higher MAH concentrations, for various DCP concentrations. This conclusion offers a good explanation for the disagreement over the grafting sites along a PP chain. Moreover, the grafting degree was found to increase considerably up to a peak value and thereafter to decrease continuously with increasing MAH concentration. The peak shifted toward lower MAH concentrations and became progressively lower with increasing DCP concentration; when the DCP concentration was below 0.1 wt %, the peak was hardly observable. These results are in good agreement with experiment.
Abstract:
The speciation and distribution of Gd(III) in human interstitial fluid was studied by computer simulation. Meanwhile, an artificial neural network was applied to estimate the log beta values of the complexes. The results show that the precipitate species GdPO4 and Gd2(CO3)3 are predominant. Among the soluble species, free Gd(III), [Gd(HSA)], [Gd(Ox)], and the ternary complexes of Gd(III) with citrate are the main species, and [Gd3(OH)4] becomes predominant at a total Gd(III) concentration of 2.2x10^-2 mol/L.
Abstract:
An artificial neural network (ANN) and multiple linear regression (MLR) were used to simulate the 13C NMR chemical shifts of 118 central carbon atoms in 18 pyridines and quinolines. Electronic and geometric features were calculated to describe the environment of each central carbon atom. The results provided by the ANN method were better than those achieved by MLR.
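The MLR baseline the abstract compares against amounts to an ordinary least-squares fit of the chemical shift on the calculated descriptors. A minimal, dependency-free sketch via the normal equations is below; the descriptor values and shifts in the usage example are made up for illustration, not the paper's data.

```python
def mlr_fit(X, y):
    """Ordinary least-squares multiple linear regression.

    Fits y ~ b0 + b1*x1 + ... + bp*xp by solving the normal equations
    (A^T A) b = A^T y with Gaussian elimination (partial pivoting).
    Minimal illustrative baseline, not the paper's fitted model.
    """
    A = [[1.0] + list(row) for row in X]  # prepend intercept column
    p = len(A[0])
    ata = [[sum(A[i][j] * A[i][k] for i in range(len(A))) for k in range(p)]
           for j in range(p)]
    aty = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(p)]
    for col in range(p):                       # forward elimination
        piv = max(range(col, p), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, p):
            f = ata[r][col] / ata[col][col]
            for cc in range(col, p):
                ata[r][cc] -= f * ata[col][cc]
            aty[r] -= f * aty[col]
    b = [0.0] * p                              # back substitution
    for r in range(p - 1, -1, -1):
        b[r] = (aty[r] - sum(ata[r][c] * b[c] for c in range(r + 1, p))) / ata[r][r]
    return b

# Two hypothetical descriptors per atom; shifts generated from b = [1, 2, 3]
coeffs = mlr_fit([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], [1, 3, 4, 6, 14])
print(coeffs)  # recovers the generating coefficients
```

An ANN improves on this only insofar as the shift depends non-linearly on the descriptors, which is the comparison the abstract reports.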
Abstract:
The MOS transistor physical model described in [3] is presented here as a network model. The goal is to obtain an accurate model, suitable for simulation, free from certain problems reported in the literature [13], and conceptually as simple as possible. To achieve this goal the original model had to be extended and modified. The paper presents the derivation of the network model from the physical equations, including the corrections required for simulation, which compensate for simplifications introduced in the original physical model. Our intrinsic MOS model consists of three nonlinear voltage-controlled capacitors and a dependent current source. The charges of the capacitors and the current of the current source are functions of the voltages $V_{gs}$, $V_{bs}$, and $V_{ds}$. The complete model consists of the intrinsic model plus the parasitics. The apparent simplicity of the model results from hiding information in the characteristics of the nonlinear components. The resulting network model has been checked by simulation and analysis. It is shown that the network model is suitable for simulation: it is defined for any value of the voltages; the functions involved are continuous and satisfy Lipschitz conditions, with no jumps at region boundaries; and derivatives have been computed symbolically and are available for use by the Newton-Raphson method. The model's functions can be measured from the terminals. It is also shown that small-channel effects can be included in the model, and higher-frequency effects can be modeled by a network consisting of several sections of the basic lumped model. Future plans include a detailed comparison of the network model with models such as SPICE level 3 and a comparison of the multi-section higher-frequency model with experiments.
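The paper's own current-source characteristic is not given in the abstract. As a stand-in illustration of a voltage-controlled dependent source that is continuous across region boundaries (the property the abstract emphasizes), here is the textbook square-law MOS drain current, which matches value and slope at the triode/saturation boundary; vt and k are assumed example parameters.

```python
def ids_square_law(vgs, vds, vt=0.7, k=2e-4):
    """Textbook square-law MOS drain current (A), not the paper's model.

    Continuous (with continuous derivative in vds) across the cutoff,
    triode, and saturation region boundaries, illustrating the kind of
    jump-free characteristic a simulator-friendly model needs.
    """
    if vgs <= vt:
        return 0.0                                  # cutoff
    vov = vgs - vt                                  # overdrive voltage
    if vds < vov:
        return k * (vov * vds - vds * vds / 2.0)    # triode
    return 0.5 * k * vov * vov                      # saturation
```

At vds = vov the triode expression reduces to k*vov^2/2, exactly the saturation value, so a Newton-Raphson iteration sees no discontinuity there.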
Abstract:
We present methods for calculating two performance parameters of multipath, multistage interconnection networks: the normalized throughput and the probability of successful message transmission. We develop a set of exact equations for the loading probability mass functions of the network channels, together with a program for solving them exactly. We also develop a Monte Carlo method for approximate solution of the equations, and show that the resulting approximation always calculates the performance parameters more quickly than direct simulation.
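The paper's exact equations are not reproduced in the abstract, but the classical stage-by-stage recurrence for unbuffered delta networks of 2x2 crossbars (Patel's analysis) illustrates the kind of channel-loading computation involved: if p_k is the probability that a given line out of stage k carries a request, then p_{k+1} = 1 - (1 - p_k/2)^2. This is a related textbook result used here as an assumed example, not the paper's method.

```python
def delta_network_throughput(stages, load=1.0):
    """Normalized throughput of an unbuffered delta network of 2x2 crossbars.

    Applies the classical per-stage recurrence p' = 1 - (1 - p/2)^2,
    where p is the probability a stage-input line carries a request.
    Textbook illustration, not the paper's exact loading equations.
    """
    p = load
    for _ in range(stages):
        p = 1.0 - (1.0 - p / 2.0) ** 2  # two inputs contend per output
    return p

print(delta_network_throughput(1))  # one stage at full load -> 0.75
print(delta_network_throughput(6))  # throughput degrades with depth
```

Evaluating such a recurrence (or a Monte Carlo estimate of the loading distributions) is far cheaper than simulating every message through the fabric, which is the speed argument the abstract makes.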
Abstract:
The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: what is the basic predictive power of TCP regarding network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power to enable TCP (or variants) to perform well in generalized network settings. To that end, we use maximum likelihood ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated given network responses, such as distributions of packet delays or TCP throughput, conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating the packet loss type; and that packet delay is a better signal of network state than short-term throughput. We also demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect estimation.
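The core of such a detector is a likelihood ratio test on an observed statistic, for example the packet delay around a loss, under the two hypotheses. The sketch below models both conditional delay distributions as Gaussian with illustrative parameters (congestion losses preceded by queue build-up, hence larger delays); these are assumptions, not the paper's fitted distributions.

```python
import math

def classify_loss(delay, mu_c=120.0, sd_c=20.0, mu_w=60.0, sd_w=20.0,
                  prior_c=0.5):
    """Likelihood-ratio classification of a packet loss by observed delay (ms).

    Compares Gaussian log-likelihoods under the congestion hypothesis
    (mu_c, sd_c) and the wireless-error hypothesis (mu_w, sd_w), with a
    Bayes threshold set by the congestion prior.  Parameters are
    illustrative, not the paper's measured distributions.
    """
    def loglik(x, mu, sd):
        return -0.5 * math.log(2.0 * math.pi * sd * sd) \
               - (x - mu) ** 2 / (2.0 * sd * sd)

    llr = loglik(delay, mu_c, sd_c) - loglik(delay, mu_w, sd_w)
    threshold = math.log((1.0 - prior_c) / prior_c)  # 0 for equal priors
    return "congestion" if llr > threshold else "wireless"

print(classify_loss(115.0))  # long delays suggest queue build-up
print(classify_loss(62.0))   # short delays suggest a wireless error
```

The detector's error rate then depends on how separated the two conditional distributions are, which is exactly what the abstract's simulations quantify.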
Abstract:
The development and deployment of distributed network-aware applications and services over the Internet require the ability to compile and maintain a model of the underlying network resources with respect to one or more characteristic properties of interest. To be manageable, such models must be compact and must enable a representation of properties along the temporal, spatial, and measurement-resolution dimensions. In this paper, we propose a general framework for constructing such metric-induced models using end-to-end measurements. We instantiate our approach with one such property, packet loss rates, and present an analytical framework for the characterization of Internet loss topologies. From the perspective of a server, the loss topology is a logical tree rooted at the server with clients at its leaves, in which edges represent lossy paths between pairs of internal network nodes. We show how end-to-end unicast packet-probing techniques can be used to (1) infer a loss topology and (2) identify the loss rates of links in an existing loss topology. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling, and mirror-site selection. We report simulation, implementation, and Internet deployment results that show the effectiveness of our approach and its robustness, in terms of accuracy and convergence, over a wide range of network conditions.
Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.
Abstract:
This paper introduces ART-EMAP, a neural architecture that uses spatial and temporal evidence accumulation to extend the capabilities of fuzzy ARTMAP. ART-EMAP combines supervised and unsupervised learning and a medium-term memory process to accomplish stable pattern category recognition in a noisy input environment. The ART-EMAP system features (i) distributed pattern registration at a view category field; (ii) a decision criterion for mapping between view and object categories which can delay categorization of ambiguous objects and trigger an evidence accumulation process when faced with a low confidence prediction; (iii) a process that accumulates evidence at a medium-term memory (MTM) field; and (iv) an unsupervised learning algorithm to fine-tune performance after a limited initial period of supervised network training. ART-EMAP dynamics are illustrated with a benchmark simulation example. Applications include 3-D object recognition from a series of ambiguous 2-D views.