967 results for Distributed Simulation
Abstract:
Kaolinite (Kaol) intercalated with potassium acetate (Ac) was prepared and characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and thermogravimetry. Molecular dynamics simulation with the INTERFACE force field was performed to investigate the structure of the Kaol–Ac intercalation complex and the hydrogen bonds between Kaol and the intercalated Ac and water. The acetate anions and water arranged in a bilayer structure in the interlayer space of Kaol. The potassium cations distributed in the interlayer space and strongly coordinated with acetate anions as well as water, rather than keying into the ditrigonal holes of the tetrahedral surface of Kaol. Strong hydrogen bonds formed between the hydrogen atoms of the hydroxyls on the octahedral surface and the oxygen atoms of both acetate anions and water. The acetate anions and water also formed weak hydrogen bonds with the silica tetrahedral surface, through their hydrogen atoms and the surface oxygen atoms.
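Hydrogen-bond analyses like the one above are typically automated with a geometric criterion applied to MD trajectory frames. The sketch below is a minimal illustration, not the authors' analysis code; the 3.5 Å donor–acceptor cutoff and 150° donor–hydrogen–acceptor angle threshold are common conventions assumed here.

```python
import math

def _dist(a, b):
    # Euclidean distance between two (x, y, z) points in angstroms.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def _angle_deg(a, b, c):
    # Angle at vertex b between rays b->a and b->c, in degrees.
    ba = [x - y for x, y in zip(a, b)]
    bc = [x - y for x, y in zip(c, b)]
    dot = sum(x * y for x, y in zip(ba, bc))
    norm = math.sqrt(sum(x * x for x in ba)) * math.sqrt(sum(x * x for x in bc))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def is_hydrogen_bond(donor, hydrogen, acceptor, max_da=3.5, min_dha_angle=150.0):
    """Geometric H-bond test: donor-acceptor distance below max_da (angstrom)
    and donor-hydrogen-acceptor angle close to linear."""
    return (_dist(donor, acceptor) <= max_da and
            _angle_deg(donor, hydrogen, acceptor) >= min_dha_angle)
```

Counting such bonds between the octahedral-surface hydroxyls and the acetate/water oxygens over a trajectory yields the bonding statistics the abstract reports qualitatively.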
Abstract:
Purpose Traditional construction planning relies upon the critical path method (CPM) and bar charts. Both methods suffer from visualization and timing issues that could be addressed by 4D technology specifically geared to the needs of the construction industry. This paper proposes a new construction planning approach based on simulation using a game engine. Design/methodology/approach A 4D automatic simulation tool was developed and a case study was carried out. The proposed tool was used to simulate and optimize the plans for the installation of a temporary platform for piling in a civil construction project in Hong Kong. The tool simulated the outcome of the construction process under three variables: 1) equipment, 2) site layout and 3) schedule. Through this, the construction team was able to repeatedly simulate a range of options. Findings The results indicate that the proposed approach can provide a user-friendly 4D simulation platform for the construction industry. The simulation can also identify the solution sought by the construction team. The paper also identifies directions for further development of 4D technology as an aid in construction planning and decision-making. Research limitations/implications The tests of the tool are limited to a single case study, and further research is needed to test the use of game engines for construction planning in different construction projects to verify their effectiveness. Future research could also explore the use of alternative game engines and compare their performance and results. Originality/value The authors propose the use of a game engine to simulate the construction process based on resources, working space and construction schedule. The developed tool can be used by end-users without simulation experience.
Abstract:
Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It achieves excellent efficiency because it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie's stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as typical applications require many thousands of simulation runs.
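For orientation, the simplest of the baselines named above, the Euler τ-leap, can be sketched for a single decay reaction X → ∅. The reaction, rate, and the Knuth-style Poisson sampler below are illustrative choices, not the paper's benchmark problems or its Bulirsch-Stoer scheme.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplication method; adequate for the small means used here.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def euler_tau_leap(x0, rate, tau, t_end, rng):
    """Euler tau-leap for the decay reaction X -> 0 with propensity a(x) = rate*x.
    Each step fires a Poisson(a * tau) batch of reactions at once, so large
    populations advance far faster than one-reaction-at-a-time SSA."""
    x, t = x0, 0.0
    while t < t_end - 1e-12 and x > 0:
        fired = poisson_sample(rate * x * tau, rng)
        x = max(0, x - fired)
        t += tau
    return x
```

Averaged over many runs, the population at t = 1 tracks the deterministic mean x0·e^(-rate·t), with the leap size τ trading accuracy for speed, which is exactly the trade-off the higher-order method above is designed to soften.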
Abstract:
For point-to-point multiple-input multiple-output systems, Dayal-Brehler-Varanasi have proved that training codes achieve the same diversity order as that of the underlying coherent space-time block code (STBC) if a simple minimum mean-squared-error estimate of the channel, formed using the training part, is employed for coherent detection of the underlying STBC. In this letter, a similar strategy involving a combination of training, channel estimation and detection in conjunction with existing coherent distributed STBCs is proposed for noncoherent communication in Amplify-and-Forward (AF) relay networks. Simulation results show that the proposed simple strategy outperforms distributed differential space-time coding for AF relay networks. Finally, the proposed strategy is extended to asynchronous relay networks using orthogonal frequency division multiplexing.
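The training-based estimate referred to above can be sketched in its simplest form: a scalar pilot observation y = h·p + n, followed by coherent detection with the estimate. The scalar setting, symbol names, and BPSK constellation are illustrative simplifications of the MIMO/relay case, not the letter's scheme.

```python
def mmse_channel_estimate(pilot, received, noise_var, channel_var=1.0):
    """Scalar MMSE estimate of h from the training observation y = h*p + n,
    assuming h ~ CN(0, channel_var) and n ~ CN(0, noise_var)."""
    num = channel_var * pilot.conjugate() * received
    den = channel_var * abs(pilot) ** 2 + noise_var
    return num / den

def coherent_bpsk_detect(y, h_hat):
    """Nearest-symbol BPSK decision using the channel estimate:
    correlate with h_hat and take the sign of the real part."""
    return 1 if (h_hat.conjugate() * y).real >= 0 else -1
```

At low noise the estimate approaches the true channel; with heavier noise the MMSE estimator shrinks toward zero, which is what distinguishes it from a plain least-squares estimate.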
Abstract:
We propose a dynamic mathematical model of tissue oxygen transport by a preexisting three-dimensional microvascular network which provides nutrients for an in situ cancer at the very early stage of primary microtumour growth. The expanding tumour consumes oxygen during its invasion of the surrounding tissues and cooption of host vessels. The preexisting vessel cooption, remodelling and collapse are modelled through the changes in haemodynamic conditions due to the growing tumour. A detailed computational model of oxygen transport in tumour tissue is developed by considering (a) the time-varying oxygen advection-diffusion equation within the microvessel segments, (b) the oxygen flux across the vessel walls, and (c) the oxygen diffusion and consumption within the tumour and surrounding healthy tissue. The results show the oxygen concentration distribution at different time points of early tumour growth. In addition, the influence of preexisting vessel density on oxygen transport is discussed. The proposed model not only provides a quantitative approach for investigating the interactions between tumour growth and oxygen delivery, but is also extendable to model the transport of other molecules or chemotherapeutic drugs in future studies.
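Component (c) above, diffusion with consumption in tissue, is the easiest piece to illustrate. The one-dimensional explicit finite-difference step below, with zero-flux boundaries and first-order consumption, is a simplified stand-in for the paper's 3D model; the grid and coefficients are assumptions.

```python
def diffuse_consume(c, D, k, dx, dt):
    """One explicit finite-difference step of dc/dt = D*d2c/dx2 - k*c on a 1D
    grid with zero-flux (mirror) boundaries. Stable when D*dt/dx**2 <= 0.5."""
    n = len(c)
    new = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]       # mirror at the left wall
        right = c[i + 1] if i < n - 1 else c[i]  # mirror at the right wall
        new[i] = c[i] + dt * (D * (left - 2 * c[i] + right) / dx ** 2 - k * c[i])
    return new
```

With k = 0 and zero-flux walls the total oxygen content is conserved while a peak spreads; with k > 0 the tissue consumes oxygen everywhere, which is the mechanism by which the growing tumour depresses the local concentration in the full model.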
Abstract:
Background: Coronary tortuosity (CT) is a common coronary angiographic finding. Whether CT leads to an appreciable reduction in coronary pressure distal to the tortuous segment of the coronary artery is still unknown. The purpose of this study is to determine the impact of CT on coronary pressure distribution by numerical simulation. Methods: 21 idealized models were created to investigate the influence of coronary tortuosity angle (CTA) and coronary tortuosity number (CTN) on coronary pressure distribution. A 2D incompressible Newtonian flow was assumed and the computational simulation was performed using the finite volume method. CTA values of 30°, 60°, 90°, 120° and CTN values of 0, 1, 2, 3, 4, 5 were examined under both steady and pulsatile conditions, and the changes of outlet pressure and inlet velocity during the cardiac cycle were considered. Results: Coronary pressure distribution was affected by both CTA and CTN. We found that the pressure drop between the start and the end of the CT segment decreased with CTA, and the length of the CT segment also declined with CTA. An increase in CTN resulted in an increase in the pressure drop. Conclusions: Compared with no CT, CT can result in a greater decrease in coronary blood pressure, depending on the severity of tortuosity, and severe CT may cause myocardial ischemia.
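For intuition about why a longer tortuous path raises the pressure drop, a closed-form Hagen-Poiseuille estimate is sketched below. This laminar-tube relation is not the paper's finite-volume simulation, and the viscosity and vessel dimensions used in the example are assumed values.

```python
import math

def poiseuille_pressure_drop(mu, length, flow, radius):
    """Hagen-Poiseuille pressure drop for steady laminar flow in a tube:
    dP = 8 * mu * L * Q / (pi * r**4), all quantities in SI units."""
    return 8.0 * mu * length * flow / (math.pi * radius ** 4)
```

Because the drop is linear in path length, a tortuous segment (longer path for the same end-to-end distance) produces a larger drop than a straight one, consistent with the CTN trend reported above; the strong r⁻⁴ dependence also shows why small calibre changes dominate.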
Abstract:
Concurrency control (CC) algorithms are important in distributed database systems to ensure the consistency of the database. A number of such algorithms are available in the literature. The issue of performance evaluation of these algorithms has been recognized as important, but only a few studies have been carried out to date. This paper deals with the performance evaluation of a CC algorithm proposed by Rosenkrantz et al. through a detailed simulation study. In doing so, the algorithm has been modified so that it can, within itself, take care of redundancy in the database. The influences of various system parameters and of the transaction profile on the response time and on the degree of conflict are considered. The entire study was carried out using the programming language SIMULA on a DEC-1090 system.
Abstract:
The vision sense of standalone robots is limited by line of sight and onboard camera capabilities, but processing video from remote cameras puts a high computational burden on robots. This paper describes the Distributed Robotic Vision Service, DRVS, which implements an on-demand distributed visual object detection service. Robots specify visual information requirements in terms of regions of interest and object detection algorithms. DRVS dynamically distributes the object detection computation to remote vision systems with processing capabilities, and the robots receive high-level object detection information. DRVS relieves robots of managing sensor discovery and reduces data transmission compared to image sharing models of distributed vision. Navigating a sensorless robot from remote vision systems is demonstrated in simulation as a proof of concept.
Abstract:
The simulation technique has gained much importance in performance studies of Concurrency Control (CC) algorithms for distributed database systems. However, details regarding the simulation methodology and implementation are seldom given in the literature. One objective of this paper is to elaborate on the simulation methodology using SIMULA. Detailed studies have been carried out on a centralised CC algorithm and its modified version. The results compare well with a previously reported study of these algorithms. Here, additional results concerning the update intensiveness of transactions and the degree of conflict are obtained. The degree of conflict is quantitatively measured, and it is seen to be a useful performance index. Regression analysis has been carried out on the results, and an optimisation study using the regression model has been performed to minimise the response time. Such a study may prove useful for the design of distributed database systems.
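The degree of conflict studied above can be illustrated with a toy model: two transactions each lock k of N distinct items, and a conflict occurs when their lock sets intersect. The model, the item counts, and the Monte Carlo check below are illustrative, not the SIMULA study's workload.

```python
import random
from math import comb

def conflict_probability(n_items, k, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that two transactions,
    each locking k distinct items out of n_items, share at least one item."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = set(rng.sample(range(n_items), k))
        b = set(rng.sample(range(n_items), k))
        hits += bool(a & b)
    return hits / trials

def conflict_probability_exact(n_items, k):
    """Closed form for the same model: 1 - C(n-k, k) / C(n, k)."""
    return 1.0 - comb(n_items - k, k) / comb(n_items, k)
```

Even with only 5 locks out of 100 items, two concurrent transactions conflict roughly 23% of the time, which is why the degree of conflict is such a sensitive performance index.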
Abstract:
Parthenium weed (Parthenium hysterophorus L.) is an erect, branched, annual plant of the family Asteraceae. It is native to the tropical Americas but is now widely distributed throughout Africa, Asia, Oceania, and Australasia. Due to its allelopathic and toxic characteristics, parthenium weed has come to be considered a weed of global significance. Its effects occur across agriculture (crops and pastures) and within natural ecosystems, and it has impacts upon human and animal health. Although integrated weed management (IWM) for parthenium weed has had some success, due to its tolerance and good adaptability to temperature, precipitation, and CO2, this weed has been predicted to become more vigorous under a changing climate, resulting in an altered canopy architecture. From the viewpoint of IWM, the altered canopy architecture may be associated not only with improved competitive ability and replacement but may also alter the effectiveness of biocontrol agents and other management strategies. This paper reports on a preliminary study of parthenium weed canopy architecture at three temperature regimes (day/night 22/15 °C, 27/20 °C, and 32/25 °C, in 12/12-hour cycles) and establishes a three-dimensional (3D) canopy model using Lindenmayer systems (L-systems). The experiment was conducted in a series of controlled-environment rooms with parthenium weed plants grown in a heavy clay soil. A sonic digitizer system was used to record the morphology, topology, and geometry of the plants for model construction. The main findings include the determination of the phyllochron, which enables the prediction of parthenium weed growth under different temperature regimes, and that increased temperature enhances growth and enlarges the plants' canopy size and structure. The developed 3D canopy model provides a tool to simulate and predict the weed's growth in response to temperature, and can be adjusted for studies of other climatic variables such as precipitation and CO2.
Further studies are planned to investigate the effects of other climatic variables, and the predicted changes in the pathogenic biocontrol agent effectiveness.
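The L-system canopy model above rests on parallel string rewriting. A minimal sketch follows; the axiom, the branching rule, and the bracket notation are generic L-system conventions, not the parthenium model's actual production rules.

```python
def lsystem(axiom, rules, iterations):
    """Iteratively apply parallel rewriting rules to a string; symbols
    without a rule are copied unchanged. Brackets [ ] conventionally
    push/pop turtle state, creating branches when the string is
    interpreted geometrically."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

For example, the branching rule F → F[+F]F doubles the depth of branching at every iteration; calibrating such rules against digitized morphology (as the sonic digitizer data above allows) is what turns the formalism into a predictive canopy model.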
Abstract:
Flexible objects such as a rope or a snake move in a way such that their axial length remains almost constant. To simulate the motion of such an object, one strategy is to discretise the object into a large number of small rigid links connected by joints. However, the resulting discretised system is highly redundant, and the joint rotations for a desired Cartesian motion of any point on the object cannot be solved for uniquely. In this paper, we revisit an algorithm, based on the classical tractrix curve, to resolve the redundancy in such hyper-redundant systems. For a desired motion of the `head' of a link, the `tail' is moved along a tractrix, and recursively all links of the discretised object are moved along different tractrix curves. The algorithm is illustrated by simulations of a moving snake, the tying of knots with a rope and a solution of the inverse kinematics of a planar hyper-redundant manipulator. The simulations show that the tractrix-based algorithm leads to a more `natural' motion, since the motion is distributed uniformly along the entire object with displacements diminishing from the `head' to the `tail'.
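A discrete sketch of the recursion described above: when the head of a link moves, its tail is pulled toward the new head position along the line joining them, preserving the link length; this first-order "follow the leader" step approximates motion of the tail along a tractrix. The 2D setting and uniform link length are simplifications of the paper's formulation.

```python
import math

def follow_head(head_new, tail_old, link_len):
    """Place the tail on the line from the new head through the old tail,
    at distance link_len, so the rigid-link constraint is preserved."""
    dx = tail_old[0] - head_new[0]
    dy = tail_old[1] - head_new[1]
    d = math.hypot(dx, dy)
    return (head_new[0] + dx / d * link_len,
            head_new[1] + dy / d * link_len)

def move_chain(joints, head_target, link_len):
    """Propagate a head displacement down the chain: each joint plays
    'tail' to the joint ahead of it, recursively."""
    new = [head_target]
    for p in joints[1:]:
        new.append(follow_head(new[-1], p, link_len))
    return new
```

Running this on a straight chain whose head is displaced sideways shows the signature behaviour the abstract describes: link lengths stay fixed while the displacement decays monotonically from head to tail.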
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞.
In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as providing bounds for the performance with practical distributed schedulers.
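Of the three schemes analysed above, the tree algorithm is the simplest to sketch: each node forwards a single partial maximum up a spanning tree, so exactly n − 1 messages are exchanged. The serial recursion below illustrates that convergecast; it does not model the contention-free schedule or radio costs.

```python
def tree_max(children, values, root=0):
    """Max by convergecast on a spanning tree: every node reports
    max(own value, children's reports) to its parent, so a tree on n
    nodes needs exactly n - 1 messages for a one-time max computation."""
    reports = [tree_max(children, values, c) for c in children.get(root, [])]
    return max([values[root]] + reports)
```

Because max is a type-threshold function, each node's report is a single value regardless of subtree size, which is what keeps the energy expenditure linear in n for this scheme.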
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using a fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of the cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, in which a narrow-band signal source was considered and Gaussian noise sources, which produce a spatially correlated background noise, were distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and the variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared to the evaluation of the covariance function, there are more cross terms, which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f(s) and f(g), and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
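The second approach above, one-bit quantization at each node followed by a fusion rule, can be sketched as follows. The fixed threshold and the OR fusion rule are illustrative placeholders for the optimized threshold test and fusion rule studied in the paper.

```python
def quantize(reading, threshold):
    """Two-level quantization: each node broadcasts a single bit."""
    return 1 if reading >= threshold else 0

def fuse_or(bits):
    """OR fusion at the local fusion centre: declare 'intruder' (1)
    if any node's reading crossed its threshold."""
    return int(any(bits))

def detect(readings, threshold):
    """End-to-end decision: quantize every reading, then fuse."""
    return fuse_or(quantize(r, threshold) for r in readings)
```

Since sensing is local, only nodes near the intruder produce large readings, so the OR rule fires on a single strong reading while total communication stays at one bit per node.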
Abstract:
In this paper, we propose a new fault-tolerant distributed deadlock detection algorithm that can handle the loss of any resource-release message. It is based on a token-based distributed mutual exclusion algorithm. We have evaluated and compared the performance of the proposed algorithm with that of two other algorithms, belonging to two different classes, using simulation studies. The proposed algorithm is found to be more efficient in terms of the average number of messages per wait and the average deadlock duration than the other two algorithms in all situations, and has comparable or better performance in terms of other parameters.
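Deadlock detection of any flavour ultimately amounts to finding a cycle in the global wait-for graph. The sketch below is a centralized illustration of that core check, not the token-based fault-tolerant protocol proposed above; it assumes every process appears as a key of the graph.

```python
def has_deadlock(wait_for):
    """Depth-first search for a cycle in a wait-for graph, given as
    {process: set of processes it is waiting on}. Reaching a GREY node
    (still on the DFS stack) closes a cycle, i.e. a deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wait_for}

    def visit(p):
        colour[p] = GREY
        for q in wait_for[p]:
            if colour[q] == GREY:
                return True
            if colour[q] == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in wait_for)
```

The distributed problem is harder precisely because no site holds this whole graph, and a lost resource-release message can leave a stale edge behind, which is the failure mode the proposed algorithm tolerates.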