974 results for Simulation Environments
Abstract:
Artificial neural network (ANN) and multiple linear regression (MLR) methods were used to simulate the C-13 NMR chemical shifts of 118 central carbon atoms in 18 pyridines and quinolines. Electronic and geometric features were calculated to describe the environments of the central carbon atoms. The results provided by the ANN method were better than those achieved by MLR.
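A minimal sketch of such an MLR-versus-ANN comparison, using synthetic descriptor data rather than the paper's calculated electronic and geometric features; the feature matrix, target construction and network size here are illustrative assumptions:

```python
# Sketch: compare multiple linear regression against a small neural network
# on synthetic "descriptor" features (not the paper's actual descriptors).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(118, 6))            # 6 electronic/geometric descriptors per carbon
shifts = X @ rng.normal(size=6) + 0.5 * np.sin(X[:, 0]) + rng.normal(0, 0.1, 118)

X_tr, X_te, y_tr, y_te = train_test_split(X, shifts, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print("MLR MAE:", mean_absolute_error(y_te, mlr.predict(X_te)))
print("ANN MAE:", mean_absolute_error(y_te, ann.predict(X_te)))
```

With a mildly nonlinear target like this one, the network typically edges out the linear model, mirroring the abstract's conclusion.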
Abstract:
Three kinds of steel were studied using electrically connected hanging specimens in a corrosion simulation device and offshore long-scale hanging specimens. The experimental results obtained by the two methods show that the device reflects the offshore corrosion environment well. A Ni-Cu-P steel specimen was studied through analysis of its corrosion products and corrosion types. The surfaces of the samples before and after removal of the rust layer produced by the two methods were observed and compared after the experiments. The microstructures of the corrosion products formed under different marine environments were analyzed and compared by IR. The results indicate good correlation between the electrically connected hanging specimen method and the long-scale hanging specimen method.
Abstract:
A corrosion simulation device was studied alongside offshore long-scale hanging specimens. A Ni-Cu-P steel specimen was studied by analysing its corrosion products and corrosion types. The appearance of the samples and the surface of the metallic substrate after removal of the rust layer produced by the two methods were observed and compared after 470 days of exposure. The phase structures of the corrosion products formed under different marine environments were analysed and compared. The results further indicate good correlation between the electrically connected hanging specimen method and the long-scale hanging specimen method.
Abstract:
A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes required and the cost of moving through the sensor field to place them. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. This latency is critical in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has addressed only parts of this problem in isolation, and has not properly considered the problems of moving through a constrained environment, discovering changes to that environment during the repair, or network quality after the restoration.

In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so our aim is to optimise the use of resources in the static, fully observed problem. In the real world, the radio and physical environments after damage would not be known, which creates a dynamic problem in which the damage must be discovered. We therefore extend to the dynamic problem, in which network repair involves both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can communicate with a sink within a hop-count limit after the network is restored. For each new variant of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) that prioritise different objectives.

We evaluate our solutions in simulation, assessing solution quality (node cost, movement cost, computation time and total restoration time) while varying the problem type and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different movement speeds of the repairing agent have a significant impact on performance and must be taken into account when selecting the algorithm. In particular, the node-based approaches are best for node cost, and the path-based approaches are best for mobility cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent; with a medium-speed agent, the total restoration times of the two approaches are almost balanced.
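As an illustration of the hop-count constraint in the final problem variant, the sketch below checks whether every required location can reach some sink within H hops after a candidate repair. The graph representation and the names (adj, H) are assumptions for illustration, not the thesis's data structures:

```python
# Sketch: hop-count feasibility check via multi-source BFS from the sinks.
from collections import deque

def hops_to_nearest_sink(adj, sinks):
    """Return hop distance from each reachable node to its nearest sink."""
    dist = {s: 0 for s in sinks}
    q = deque(sinks)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def satisfies_hop_constraint(adj, sinks, required, H):
    dist = hops_to_nearest_sink(adj, sinks)
    return all(n in dist and dist[n] <= H for n in required)

# Toy restored network: sink - relay - two sensors, hop limit 2.
adj = {"sink": ["r1"], "r1": ["sink", "s1", "s2"], "s1": ["r1"], "s2": ["r1"]}
print(satisfies_hop_constraint(adj, ["sink"], ["s1", "s2"], H=2))  # True
```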
Abstract:
In virtual reality applications, the aim is to provide real-time graphics running at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high-persistence image artifact: movement may lose continuity, and the image jumps from one frame to another. In this paper, we discuss our initial exploration of the effects of high-persistence frames caused by low refresh rates and compare them to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low-frame-rate simulation images are displayed with low persistence by blanking the display during the extra time each image would otherwise be shown. In order to isolate the visual effects, we constructed a simulator for low- and high-persistence displays that does not affect input latency. A controlled user study comparing the three conditions for the tasks of 3D selection and navigation was conducted. Results indicate that the low-persistence display technique may not negatively impact user experience or performance as compared to the high-persistence case. Directions for future work on the use of low-persistence displays for low-frame-rate situations are discussed.
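The blanking technique can be sketched as follows, assuming a 90 Hz display fed by a 30 Hz simulation; the show()/blank() hooks stand in for real display calls and are purely illustrative:

```python
# Sketch: show each low-rate simulation frame for one display refresh only,
# blanking the remaining refreshes instead of holding (persisting) the frame.

DISPLAY_HZ, SIM_HZ = 90, 30
refreshes_per_sim_frame = DISPLAY_HZ // SIM_HZ   # 3 refreshes per new frame

def show(frame):  print(f"show  frame {frame}")
def blank():      print("blank")

for sim_frame in range(2):                       # two simulation frames
    for refresh in range(refreshes_per_sim_frame):
        if refresh == 0:
            show(sim_frame)                      # display the new frame once
        else:
            blank()                              # low persistence: black out
```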
Abstract:
When designing a new passenger ship or modifying an existing design, how do we ensure that the proposed design and crew emergency procedures are safe in the event of an evacuation resulting from fire or another incident? In the wake of major maritime disasters such as the Scandinavian Star, the Herald of Free Enterprise and the Estonia, and in light of the growth in the number of high-density high-speed ferries and large-capacity cruise ships, issues concerning the evacuation of passengers and crew at sea are receiving renewed interest. Fire and evacuation models are now available that can realistically simulate the spread of fire, fire suppression systems and the human response to fire, as well as model human performance in heeled orientations, linked to a virtual reality environment that produces realistic visualisations of modelled scenarios; these can be used to aid the engineer in assessing ship design and procedures. This paper describes the maritimeEXODUS ship evacuation model and the SMARTFIRE fire simulation model and provides an example application demonstrating their use in performing fire and evacuation analysis for a large passenger ship, partially based on the requirements of MSC Circular 1033. The fire simulations include the action of a water mist system.
Abstract:
This study investigates the use of computer-modelled versus directly experimentally determined fire hazard data for assessing survivability within buildings, using evacuation models incorporating Fractional Effective Dose (FED) models. The objective is to establish a link between effluent toxicity, measured using a variety of small- and large-scale tests, and building evacuation. For the scenarios under consideration, fire simulation is typically used to determine the time at which non-survivable conditions develop within the enclosure, for example when the smoke or toxic effluent layer falls below a critical height deemed detrimental to evacuation, or when the radiative fluxes reach a critical value leading to the onset of flashover. The evacuation calculation would then be used to determine whether people within the structure could evacuate before these critical conditions develop.
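The FED bookkeeping such models rely on can be sketched as below, following the standard cumulative form FED = Σ C_i·Δt / (C·t)_crit, with incapacitation assumed once FED reaches 1; the concentration trace and critical dose here are illustrative values, not data from the study:

```python
# Sketch: accumulate Fractional Effective Dose from a toxicant concentration
# trace; FED >= 1 marks the onset of incapacitation.

def fed(concentrations_ppm, dt_min, ct_crit_ppm_min):
    """Accumulate FED over a time series; returns (dose, time reached or None)."""
    total = 0.0
    for i, c in enumerate(concentrations_ppm):
        total += c * dt_min / ct_crit_ppm_min
        if total >= 1.0:
            return total, (i + 1) * dt_min
    return total, None

co_trace = [200, 800, 2000, 5000, 9000]            # CO ppm, one sample per minute
print(fed(co_trace, dt_min=1.0, ct_crit_ppm_min=35000.0))  # illustrative (C*t)_crit
```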
Abstract:
Thermocouples are among the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique, based on measurements from two thermocouples, to address the resulting loss of high-frequency fluctuations. In particular, a difference equation (DE) approach is proposed and compared with existing methods, both in simulation and on experimental test rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better performance in terms of time constant estimation, without any a priori assumption on the time constant ratios of the thermocouples.
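A minimal sketch of the underlying first-order sensor model and its software compensation follows; for clarity the paper's DE/GTLS identification step is replaced by an assumed known time constant, so this shows only the lag-inversion idea, not the proposed estimator:

```python
# Sketch: a thermocouple obeys tau * dT/dt + T = T_gas, so the gas
# temperature can be reconstructed as T + tau * dT/dt once tau is known.
import numpy as np

dt, n = 1e-3, 2000
t = np.arange(n) * dt
T_gas = 500 + 50 * np.sin(2 * np.pi * 20 * t)    # fluctuating gas temperature

def sensor(T_in, tau):
    """Discretized first-order lag (explicit Euler)."""
    out = np.empty_like(T_in)
    out[0] = T_in[0]
    for k in range(1, len(T_in)):
        out[k] = out[k-1] + dt / tau * (T_in[k-1] - out[k-1])
    return out

T1 = sensor(T_gas, tau=0.020)                    # slow, large-diameter wire
T_hat = T1 + 0.020 * np.gradient(T1, dt)         # invert the first-order lag
print("raw error :", np.abs(T1 - T_gas).max())
print("comp error:", np.abs(T_hat - T_gas).max())
```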
Abstract:
This paper presents a systematic measurement campaign of diversity reception techniques for use in multiple-antenna wearable systems operating at 868 MHz. The experiments were performed using six time-synchronized bodyworn receivers and considered mobile off-body communications in an anechoic chamber, an open office area and a hallway. The cross-correlation coefficient between the signal fading measured by the bodyworn receivers was dependent upon the local environment and typically below 0.7. All received signal envelopes were combined in post-processing to study the potential benefits of implementing receiver diversity based upon selection combining, equal-gain combining and maximal-ratio combining. It is shown that, in an open office area, the 5.7 dB diversity gain obtained using a dual-branch bodyworn maximal-ratio diversity system may be further improved to 11.1 dB if a six-branch system is used. First- and second-order theoretical equations for diversity reception techniques operating in Nakagami fading conditions were used to model the post-detection combined envelopes. Maximum likelihood estimates of the Nakagami m-parameter suggest that the fading conditions encountered in this study were generally less severe than Rayleigh. The paper also describes an algorithm that may be used to simulate the measured output of an M-branch diversity combiner operating in independent and identically distributed Nakagami fading environments.
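A minimal sketch of the simulation idea in the last sentence: i.i.d. Nakagami-m envelopes (the squared envelope is Gamma distributed) are drawn and combined per sample. The m, Ω and branch-count values are illustrative, and this is not the paper's algorithm verbatim:

```python
# Sketch: simulate selection, equal-gain and maximal-ratio combining of
# i.i.d. Nakagami-m fading envelopes across M branches.
import numpy as np

rng = np.random.default_rng(1)
m, omega, branches, samples = 1.5, 1.0, 6, 100_000

# Nakagami-m envelope: sqrt of a Gamma(shape=m, scale=omega/m) power sample.
r = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=(samples, branches)))

sc  = r.max(axis=1)                        # selection combining
egc = r.sum(axis=1) / np.sqrt(branches)    # equal-gain combining
mrc = np.sqrt((r ** 2).sum(axis=1))        # maximal-ratio (envelope domain)

for name, env in [("SC", sc), ("EGC", egc), ("MRC", mrc)]:
    print(name, "mean output power:", round((env ** 2).mean(), 3))
```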
Abstract:
Haptic information originates from a different human sense (touch); therefore, the quality of service (QoS) required to support haptic traffic is significantly different from that used to support conventional real-time traffic such as voice or video. Each type of network impairment has different (and severe) impacts on the user's haptic experience. There has been no specific provision of QoS parameters for haptic interaction. Previous research into distributed haptic virtual environments (DHVEs) has concentrated on synchronization of positions (haptic device or virtual objects), and is based on client-server architectures. We present a new peer-to-peer DHVE architecture that further extends this to enable force interactions between two users, whereby force data are sent to the remote peer in addition to positional information. The work presented involves both simulation and practical experimentation where multimodal data is transmitted over a QoS-enabled IP network. Both forms of experiment produce consistent results which show that the use of specific QoS classes for haptic traffic will reduce network delay and jitter, leading to improvements in users' haptic experiences with these types of applications.
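The peer-to-peer exchange of position plus force data could be sketched as below; the datagram layout, port, address and helper names are assumptions for illustration, not the architecture's actual protocol:

```python
# Sketch: one peer sends the other both a position and a force sample in a
# single small UDP datagram, as in the peer-to-peer DHVE idea above.
import socket, struct, time

PEER = ("127.0.0.1", 9999)                  # remote peer address (assumed)
FMT = "!d3f3f"                              # timestamp, position xyz, force xyz

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_sample(pos, force):
    sock.sendto(struct.pack(FMT, time.time(), *pos, *force), PEER)

send_sample((0.1, 0.2, 0.3), (0.0, -1.5, 0.0))   # one haptic update
```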
Abstract:
This paper presents a practical algorithm for the simulation of interactive deformation in a 3D polygonal mesh model. The algorithm combines the conventional simulation of deformation using a spring-mass-damping model, solved by explicit numerical integration, with a set of heuristics that describe certain features of the transient behaviour, to increase the speed and stability of the solution. In particular, this algorithm was designed for the simulation of synthetic environments where it is necessary to model realistically, in real time, the effect on non-rigid surfaces of being touched, pushed, pulled or squashed. Such objects can be solid or hollow, and have plastic, elastic or fabric-like properties. The algorithm is presented in an integrated form including collision detection and adaptive refinement, so that it may be used in a self-contained way as part of a simulation loop that includes human interface devices capturing data and rendering a realistic stereoscopic image in real time. The algorithm is designed to be used with polygonal mesh models representing complex topology, such as the human anatomy in a virtual-surgery training simulator. The paper evaluates the model behaviour qualitatively and concludes with some examples of the use of the algorithm.
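The conventional spring-mass-damping core of such an algorithm can be sketched in one dimension as below; the stiffness, damping and time-step values are illustrative, and the paper's heuristics, collision detection and adaptive refinement are omitted:

```python
# Sketch: a single damped spring integrated explicitly, the basic update
# applied per mesh edge in a spring-mass-damping deformation model.
k, c, mass, rest = 50.0, 0.8, 0.01, 1.0     # stiffness, damping, mass, rest length
dt = 1e-3
x, v = 1.3, 0.0                             # stretched start, at rest

for step in range(500):
    f = -k * (x - rest) - c * v             # spring force plus damping
    v += dt * f / mass                      # explicit (semi-implicit) Euler
    x += dt * v
print(f"length after 0.5 s: {x:.3f}")       # relaxes toward the rest length
```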
Abstract:
We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads.
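A minimal sketch of the linear-prediction idea, assuming illustrative (not measured) response times: isolated-mode measurements are mapped to consolidated-mode estimates by a fitted line.

```python
# Sketch: fit con = a + b * iso by least squares and use it to predict the
# consolidated-mode mean response time of a new workload.
import numpy as np

iso = np.array([2.1, 3.4, 4.0, 5.2, 6.8])      # isolated mean resp. times (ms)
con = np.array([3.0, 4.9, 5.7, 7.6, 9.9])      # same workloads, consolidated

b, a = np.polyfit(iso, con, deg=1)             # slope, intercept
predict = lambda r_iso: a + b * r_iso
print(f"predicted consolidated time for a 4.5 ms workload: {predict(4.5):.2f} ms")
```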
Abstract:
We propose a trace-driven approach to predict the performance degradation of disk request response times due to storage device contention in consolidated virtualized environments. Our performance model evaluates a queueing network with fair share scheduling using trace-driven simulation. The model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same virtualized server. The model parameter estimation relies on a search technique that tries to estimate the splitting and merging of blocks at the Virtual Machine Monitor (VMM) level in the case of multiple competing VMs. Simulation experiments based on traces of the Postmark and FFSB disk benchmarks show that our model is able to accurately predict the impact of workload consolidation on VM disk I/O response times.
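In the spirit of the trace-driven fair-share simulation, the sketch below round-robins one request per VM from synthetic traces and records response times; the scheduler detail and the trace values are illustrative assumptions, not the paper's model:

```python
# Sketch: a fair-share storage device serving per-VM request traces; each
# turn serves at most one request per VM, response = completion - arrival.
from collections import deque

# Per-VM traces: (arrival_time, service_time) pairs, all arriving at t=0 here.
traces = {"vm1": deque([(0.0, 2.0), (0.0, 2.0)]),
          "vm2": deque([(0.0, 1.0)])}

clock, resp = 0.0, []
while any(traces.values()):
    for vm, q in traces.items():               # fair share: one request per turn
        if q and q[0][0] <= clock:
            arrival, service = q.popleft()
            clock += service
            resp.append((vm, clock - arrival))
print(resp)                                    # per-request response times
```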
Abstract:
The expansion of an initially unmagnetized planar rarefaction wave has recently been shown to trigger a thermal anisotropy-driven Weibel instability (TAWI), which can generate magnetic fields from noise levels. It is examined here whether the TAWI can also grow in a curved rarefaction wave. The expansion of an initially unmagnetized circular plasma cloud, which consists of protons and hot electrons, into a vacuum is modelled for this purpose with a two-dimensional particle-in-cell (PIC) simulation. It is shown that the momentum transfer from the electrons to the radially accelerating protons can indeed trigger a TAWI. Radial current channels form and the aperiodic growth of a magnetowave is observed, which has a magnetic field that is oriented orthogonal to the simulation plane. The induced electric field implies that the electron density gradient is no longer parallel to the electric field. Evidence is presented here that this electric field modification triggers a second magnetic instability, which results in a rotational low-frequency magnetowave. The relevance of the TAWI is discussed for the growth of small-scale magnetic fields in astrophysical environments, which are needed to explain the electromagnetic emissions by astrophysical jets. It is outlined how this instability could be examined experimentally.
Abstract:
A two-thermocouple sensor characterization method for use in variable-flow applications is proposed. Previous offline methods for constant-velocity flow are extended using sliding data windows and polynomials to accommodate variable velocity. Analysis of Monte Carlo simulation studies confirms that the unbiased and consistent parameter estimator outperforms alternatives in the literature and has the added advantage of not requiring a priori knowledge of the time constant ratio of the thermocouples. Experimental results from a test rig are also presented.
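The sliding-window-plus-polynomial device can be sketched as below, on a synthetic signal; the window length, polynomial degree and the signal itself are illustrative assumptions, not the proposed estimator:

```python
# Sketch: re-estimate parameters over short sliding windows (here by a
# windowed polynomial fit), so that velocity-dependent drift is tracked.
import numpy as np

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.default_rng(2).normal(size=500)

window, degree = 50, 2
local_estimates = []
for start in range(0, len(t) - window, window):
    sl = slice(start, start + window)
    coeffs = np.polyfit(t[sl], signal[sl], degree)   # per-window parameters
    local_estimates.append(coeffs)
print(len(local_estimates), "windows estimated")
```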