788 results for agent-based simulation
Abstract:
Cloud Computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments, while enforcing strict performance and quality-of-service requirements defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) dynamically allocating the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which optimally detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
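As an illustration of the difference between a reactive and a predictive SLA-driven scaling decision, the sketch below is a minimal toy in Python; the SLA threshold, the simple trend-based forecast standing in for an autoregressive model, and all names are assumptions, not the algorithms from the paper.

```python
# Minimal sketch (not the paper's implementation): a reactive and a predictive
# SLA-driven VM-scaling decision. The threshold, the history window and the
# trend-based forecast (a stand-in for an autoregressive model) are assumptions.
from collections import deque

SLA_RESPONSE_MS = 200.0       # assumed SLA bound on response time
HISTORY = deque(maxlen=10)    # recent response-time measurements

def reactive_decision(current_response_ms, n_vms):
    """Scale out only after the SLA bound is already violated."""
    if current_response_ms > SLA_RESPONSE_MS:
        return n_vms + 1
    return n_vms

def predictive_decision(current_response_ms, n_vms):
    """Scale out when a simple forecast predicts an upcoming SLA violation."""
    HISTORY.append(current_response_ms)
    if len(HISTORY) < 2:
        return n_vms
    # one-step forecast from the mean recent trend
    trend = (HISTORY[-1] - HISTORY[0]) / (len(HISTORY) - 1)
    forecast = HISTORY[-1] + trend
    if forecast > SLA_RESPONSE_MS:
        return n_vms + 1
    return n_vms

if __name__ == "__main__":
    vms_reactive, vms_predictive = 2, 2
    for rt in [120, 150, 175, 190, 205]:
        vms_reactive = reactive_decision(rt, vms_reactive)
        vms_predictive = predictive_decision(rt, vms_predictive)
        print(f"response={rt} ms -> reactive VMs={vms_reactive}, "
              f"predictive VMs={vms_predictive}")
```

In this toy run the predictive rule scales out one step before the reactive rule, which is the behaviour the abstract argues for.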
Abstract:
Syndromic surveillance (SyS) systems currently exploit various sources of health-related data, most of which are collected for purposes other than surveillance (e.g. economic). Several European SyS systems use data collected during meat inspection for syndromic surveillance of animal health, as some diseases may be more easily detected post-mortem than at their point of origin or during the ante-mortem inspection upon arrival at the slaughterhouse. In this paper we use simulation to evaluate the performance of a quasi-Poisson regression (also known as an improved Farrington) algorithm for the detection of disease outbreaks during post-mortem inspection of slaughtered animals. When parameterizing the algorithm based on the retrospective analysis of 6 years of historical data, the probability of detection was satisfactory for large (range 83-445 cases) outbreaks but poor for small (range 20-177 cases) outbreaks. Varying the amount of historical data used to fit the algorithm can help increase the probability of detection for small outbreaks. However, while the use of a 0.975 quantile generated a low false-positive rate, in most cases more than 50% of outbreak cases had already occurred at the time of detection. The high variance observed in the whole-carcass condemnation time series, and the lack of flexibility in the temporal distribution of simulated outbreaks resulting from the low (monthly) reporting frequency, constitute major challenges for early detection of outbreaks in the livestock population based on meat inspection data. Reporting frequency should be increased in the future to improve the timeliness of the SyS system, while increased sensitivity may be achieved by integrating meat inspection data into a multivariate system that simultaneously evaluates multiple sources of data on livestock health.
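The sketch below illustrates the general flavour of a Farrington-style quasi-Poisson detection threshold on baseline counts; the linear trend model, the 2/3-power normal approximation and the toy data are assumptions and do not reproduce the parameterization evaluated in the paper.

```python
# Minimal sketch (illustrative only): an overdispersed Poisson GLM fitted to a
# historical baseline, with an upper detection threshold at an assumed 0.975
# quantile. Toy data and the simple trend model are invented for illustration.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def farrington_like_threshold(history, quantile=0.975):
    """Fit a quasi-Poisson GLM with a linear time trend and return an upper
    threshold for the next observation."""
    t = np.arange(len(history))
    X = sm.add_constant(t)
    fit = sm.GLM(history, X, family=sm.families.Poisson()).fit(scale="X2")
    x_new = np.array([[1.0, float(len(history))]])
    mu = fit.predict(x_new)[0]            # predicted baseline count
    phi = max(fit.scale, 1.0)             # quasi-Poisson dispersion estimate
    z = norm.ppf(quantile)
    # 2/3-power normal approximation commonly used in Farrington-type methods
    upper = mu * (1 + (2.0 / 3.0) * z * np.sqrt(phi / mu)) ** 1.5
    return upper

if __name__ == "__main__":
    baseline = np.array([12, 15, 9, 14, 11, 13, 16, 10, 12, 14, 13, 15] * 6)
    threshold = farrington_like_threshold(baseline)
    observed = 28
    print(f"threshold={threshold:.1f}, alarm={observed > threshold}")
```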
Abstract:
An effective solution to model and apply planning domain knowledge for deliberation and action in probabilistic, agent-oriented control is presented. Specifically, the addition of a task structure planning component and supporting components to an agent-oriented architecture and agent implementation is described. For agent control in risky or uncertain environments, an approach and method of goal reduction to task plan sets and schedules of action is presented. Additionally, some issues related to component-wise, situation-dependent control of a task planning agent that schedules its tasks separately from planning them are motivated and discussed.
Abstract:
State-of-the-art process-based models have been shown to be applicable to the simulation and prediction of coastal morphodynamics. On annual to decadal temporal scales, these models may show limitations in reproducing complex natural morphological evolution patterns, such as the movement of bars and tidal channels, e.g. the observed decadal migration of the Medem Channel in the Elbe Estuary, German Bight. Here, a morphodynamic model is shown to simulate the hydrodynamics and sediment budgets of the domain to some extent, but it fails to adequately reproduce the pronounced channel migration, owing to the insufficient implementation of bank erosion processes. In order to allow for long-term simulations of the domain, a nudging method has been introduced to update the model-predicted bathymetries with observations. The model-predicted bathymetry is nudged towards true states in annual time steps. Sensitivity analysis of a user-defined correlation length scale, used to define the background error covariance matrix during the nudging procedure, suggests that the optimal error correlation length is similar to the grid cell size, here 80-90 m. Additionally, spatially heterogeneous correlation lengths produce more realistic channel depths than do spatially homogeneous correlation lengths. Consecutive application of the nudging method compensates for the (stand-alone) model prediction errors and corrects the channel migration pattern, with a Brier skill score of 0.78. The nudging method proposed in this study serves as an analytical approach to update model predictions towards a predefined 'true' state, for the spatiotemporal interpolation of incomplete morphological data in long-term simulations.
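A minimal one-dimensional sketch of bathymetry nudging with a Gaussian background error correlation is given below; the gain, the grid, the observations and the additive treatment of increments are illustrative assumptions rather than the study's implementation.

```python
# Minimal 1-D sketch (illustrative, not the study's code): nudging a
# model-predicted depth profile towards sparse observations, spreading each
# innovation with Gaussian weights controlled by a correlation length scale.
import numpy as np

def nudge(model_bathy, x, obs, obs_x, corr_length=85.0, gain=1.0):
    """Spread observation-minus-model increments over neighbouring grid cells
    with weights exp(-(d/L)^2); L ~ grid cell size (here roughly 80-90 m)."""
    updated = model_bathy.copy()
    for zo, xo in zip(obs, obs_x):
        d = x - xo
        weights = np.exp(-(d / corr_length) ** 2)
        innovation = zo - np.interp(xo, x, model_bathy)
        updated += gain * weights * innovation
    return updated

if __name__ == "__main__":
    x = np.arange(0.0, 1000.0, 85.0)          # grid spacing ~ correlation length
    model = -5.0 + 0.002 * x                   # predicted depth profile (m)
    obs_x = np.array([170.0, 510.0, 850.0])    # observation locations (m)
    obs = np.array([-6.0, -4.0, -3.5])         # observed depths (m)
    print(nudge(model, x, obs, obs_x))
```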
Abstract:
To address the question of how a developing country can attract FDI, this paper explores the factors and policies that may help bring FDI into a developing country by utilizing an extended version of the knowledge-capital model. With a special focus on the effects of FTAs/EPAs between market countries and developing countries, simulations with the model revealed the following: (1) although an FTA/EPA generally tends to increase FDI to a developing country, the possibility of improving welfare through increased demand for skilled and unskilled labor becomes higher as the size of the country declines; (2) because the additional implementation of cost-saving policies to reduce firm-type/trade-link-specific fixed costs tends to depreciate the price of skilled labor by saving its input, a developing country that is extremely scarce in skilled labor is better off avoiding the additional option; (3) if a country hopes to enjoy larger welfare gains with an EPA, efforts to increase skilled labor in the country, such as investing in education, may be beneficial.
Abstract:
This paper explores the potential usefulness of an AGE model with a Melitz-type trade specification to assess the economic effects of technical regulations, taking the case of the EU ELV/RoHS directives as an example. Simulation experiments reveal that: (1) raising the fixed exporting cost for making sales in the EU market results in exports of the targeted commodities (motor vehicles and parts for ELV, and electronic equipment for RoHS) to the EU from outside regions/countries expanding while domestic trade within the EU shrinks, when the importer's preference for variety (PfV) is not strong; (2) if the PfV is not strong, policy changes that reduce the number of firms enable surviving producers with high productivity to expand production and become large-scale mass producers that fully enjoy economies of scale; and (3) when the strength of the importer's PfV is varied from zero to unity, there is a value at which the simulation results and their interpretation change completely.
Abstract:
Recently, we have presented some studies concerning the analysis, design and optimization of an experimental device developed in the UK - GPTAD - which has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Based on the idea of a modification of the previous device, in this work we present a model based on the use of stents like the Solitaire™ FR, which is in contact with the clot itself. In the case of such devices, the stent is self-expandable and the extraction of the blood clot is facilitated by the stent, which must be inside the clot. Such stents are generally inserted into position by using a guidewire inserted into the catheter. This type of modelling could potentially be useful in showing how the blood clot is moved by the various different forces involved. The modelling has been undertaken by analyzing resistance, compliance and inertance effects. We model an artery and blood clot for a range of forces applied to the guidewire. In each case we determine the interaction between the blood clot, the stent and the artery.
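A minimal lumped-parameter sketch of the resistance/compliance/inertance idea is shown below; the clot is treated as a single mass driven by an assumed guidewire force, and every parameter value is invented for illustration, not taken from the authors' model.

```python
# Minimal sketch (illustrative only): the clot as a lumped mass with
# inertance-, resistance- and compliance-like terms, driven by an assumed
# guidewire/stent force. All parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

M = 0.5e-3    # inertance-like term (kg), assumed
R = 0.05      # resistance-like (viscous) term (N*s/m), assumed
K = 2.0       # compliance-like (elastic) term (N/m), assumed

def guidewire_force(t):
    """Assumed ramp force transmitted through the stent to the clot."""
    return 0.02 * min(t, 1.0)

def rhs(t, y):
    x, v = y
    a = (guidewire_force(t) - R * v - K * x) / M
    return [v, a]

if __name__ == "__main__":
    sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)
    print(f"clot displacement after 2 s: {sol.y[0, -1] * 1e3:.2f} mm")
```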
Abstract:
The simulation of interest rate derivatives is a powerful tool for facing current market fluctuations. However, the complexity of the financial models and the way they are processed require exorbitant computation times, which is in clear conflict with the need for a processing time as short as possible to operate in the financial market. To shorten the computation time of financial derivatives, the use of hardware accelerators becomes a must.
Abstract:
In this paper, a novel method to simulate radio propagation is presented. The method consists of two steps: automatic 3D scenario reconstruction and propagation modeling. For 3D reconstruction, a machine learning algorithm is adopted and improved to automatically recognize objects in pictures taken from target regions, and 3D models are generated based on the recognized objects. The propagation model employs a ray tracing algorithm to compute the signal strength for each point on the constructed 3D map. Our proposition reduces, or even eliminates, infrastructure cost and human effort during the construction of realistic 3D scenes used in radio propagation modeling. In addition, the results obtained from our propagation model prove to be both accurate and efficient.
Abstract:
In this paper, a novel method to simulate radio propagation is presented. The method consists of two steps: automatic 3D scenario reconstruction and propagation modeling. For 3D reconstruction, a machine learning algorithm is adopted and improved to automatically recognize objects in pictures taken from the target region, and 3D models are generated based on the recognized objects. The propagation model employs a ray tracing algorithm to compute the signal strength for each point on the constructed 3D map. Compared with other methods, the work presented in this paper contributes to reducing the human effort and cost involved in constructing the 3D scene; moreover, the developed propagation model demonstrates its potential in terms of both accuracy and efficiency.
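As a rough illustration of the propagation step described in the two abstracts above, the sketch below evaluates only the direct (line-of-sight) ray with a Friis free-space loss; the frequency, transmit power and receiver grid are assumptions, and a full ray tracer would add reflections and diffractions from the reconstructed 3D models.

```python
# Minimal sketch (not the authors' code): signal strength at grid points of a
# reconstructed scene from the direct ray only, using free-space path loss.
# Frequency, transmit power and the receiver grid are assumed values.
import numpy as np

C = 3e8                                 # speed of light (m/s)
FREQ = 2.4e9                            # assumed carrier frequency (Hz)
TX_POWER_DBM = 20.0                     # assumed transmit power
TX_POS = np.array([0.0, 0.0, 10.0])     # assumed transmitter location (m)

def free_space_loss_db(distance_m):
    """Friis free-space path loss in dB."""
    wavelength = C / FREQ
    return 20.0 * np.log10(4.0 * np.pi * distance_m / wavelength)

def received_power_dbm(point):
    """Signal strength for the direct (line-of-sight) ray only."""
    d = np.linalg.norm(point - TX_POS)
    return TX_POWER_DBM - free_space_loss_db(d)

if __name__ == "__main__":
    # coarse grid of receiver points on the reconstructed map
    for x in (50.0, 100.0, 150.0):
        for y in (50.0, 100.0, 150.0):
            p = np.array([x, y, 1.5])
            print(f"({x:.0f},{y:.0f}) -> {received_power_dbm(p):.1f} dBm")
```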
Abstract:
Stochastic model updating must be considered for quantifying the uncertainties inherent in real-world engineering structures. In this way, the statistical properties of structural parameters, instead of deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of the parameters. The parameter means and variances can then be statistically estimated from all the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method offers similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
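The sketch below illustrates the decomposition idea on a toy problem: a quadratic response surface replaces the FE model, Monte Carlo response samples drive repeated deterministic inverse runs, and parameter statistics are estimated from the predictions; the stand-in "FE model" and all distributions are assumptions, not the paper's test cases.

```python
# Minimal sketch (illustrative only): response-surface surrogate + Monte Carlo
# samples of measured responses + one deterministic inverse run per sample.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def fe_model(theta):
    """Stand-in for an expensive FE analysis: two responses of one parameter."""
    return np.array([3.0 * theta + 0.1 * theta**2, 1.5 * theta - 0.05 * theta**2])

# 1) build a quadratic response surface from a few FE evaluations
design = np.linspace(0.5, 2.0, 7)
Y = np.array([fe_model(t) for t in design])
coeffs = [np.polyfit(design, Y[:, j], 2) for j in range(Y.shape[1])]
surrogate = lambda theta: np.array([np.polyval(c, theta) for c in coeffs])

# 2) Monte Carlo samples of the measured responses (assumed distribution)
true_theta = 1.2
samples = rng.normal(fe_model(true_theta), [0.05, 0.03], size=(500, 2))

# 3) one deterministic inverse problem per sample, using the surrogate
estimates = [
    least_squares(lambda t, y=y: surrogate(t[0]) - y, x0=[1.0]).x[0]
    for y in samples
]

# 4) statistical summary of the updated parameter
print(f"mean={np.mean(estimates):.3f}, std={np.std(estimates):.3f}")
```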
Abstract:
In this paper, we introduce the B2DI model, which extends the BDI model to perform Bayesian inference under uncertainty. For scalability and flexibility purposes, Multiply Sectioned Bayesian Network (MSBN) technology has been selected and adapted to BDI agent reasoning. A belief update mechanism has been defined for agents whose belief models are connected by public shared beliefs, and the certainty of these beliefs is updated based on the MSBN. The classical BDI agent architecture has been extended in order to manage uncertainty using Bayesian reasoning; the resulting extended model, called B2DI, proposes a new control loop. The proposed B2DI model has been evaluated in a network fault diagnosis scenario, comparing it with two previously developed agent models. The evaluation has been carried out on a real testbed diagnosis scenario using JADEX. As a result, the proposed model exhibits significant improvements in the cost and time required to carry out a reliable diagnosis.
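A toy sketch of the underlying idea, a BDI-style control loop whose beliefs are revised by Bayes' rule before deliberation, is given below; it uses a single belief and invented likelihoods rather than an MSBN, so it only hints at the B2DI architecture.

```python
# Minimal sketch (toy illustration, not the B2DI implementation): a BDI-style
# agent whose belief certainty is revised by Bayes' rule from incoming
# evidence before deliberation. All names and numbers are invented.
def bayes_update(prior, likelihood_true, likelihood_false):
    """Posterior probability of a belief given one piece of evidence."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1.0 - prior))

class B2DILikeAgent:
    def __init__(self):
        self.beliefs = {"link_faulty": 0.1}    # prior certainty

    def perceive(self, packet_loss_observed):
        # evidence: packet loss is more likely if the link is faulty
        self.beliefs["link_faulty"] = bayes_update(
            self.beliefs["link_faulty"],
            likelihood_true=0.9 if packet_loss_observed else 0.1,
            likelihood_false=0.2 if packet_loss_observed else 0.8,
        )

    def deliberate(self):
        return "run_diagnosis" if self.beliefs["link_faulty"] > 0.5 else "monitor"

if __name__ == "__main__":
    agent = B2DILikeAgent()
    for obs in [True, True, False, True]:
        agent.perceive(obs)
        print(f"P(faulty)={agent.beliefs['link_faulty']:.2f} -> {agent.deliberate()}")
```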
Abstract:
Connectivity analysis on whole-brain diffusion MRI data suffers from distortions caused by standard echo-planar imaging acquisition strategies. These images show characteristic geometric deformations and signal dropout, which are an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a “theoretically correct” and plausible deformation that resembles the artifact under investigation. We then correct the data with three standard methodologies (namely fieldmap-based, reversed-encoding-based, and registration-based). Finally, we rank the methods based on their geometric accuracy, their dropout compensation, and their impact on the resulting connectivity matrices.
Abstract:
Evaluation of three solar and daylighting control systems based on Calumen II, Ecotect and Radiance simulation programs to obtain an energy efficient and healthy interior in the experimental building prototype SDE10
Abstract:
Cooperative systems are suitable for many types of applications, and nowadays these systems are widely used to improve a previously defined system or to coordinate multiple devices working together. This paper provides an alternative for improving the reliability of a previous intelligent identification system. The proposed approach implements a cooperative model based on a multi-agent architecture. This new system is composed of several radar-based systems which identify a detected object and transmit their own partial results by implementing several agents and by using a wireless network to transfer data. The proposed topology is a centralized architecture in which the coordinator device is in charge of providing the final identification result depending on the group behavior. In order to determine the final outcome, three different mechanisms are introduced. The simplest one is based on majority voting, whereas the others use two different weighted voting procedures, both providing the system with learning capabilities. Using an appropriate network configuration, the success rate can be improved from the initial 80% up to more than 90%.
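A minimal sketch of coordinator-side fusion by majority voting versus weighted voting is shown below; the labels, weights and the simple accuracy-based weight update are illustrative assumptions, not the paper's mechanisms.

```python
# Minimal sketch (hypothetical): fusing partial identification results from
# several radar-based agents at the coordinator, by majority voting and by a
# weighted vote whose weights are adjusted from past accuracy.
from collections import Counter

def majority_vote(labels):
    """Return the class reported by the most agents."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, weights):
    """Return the class with the highest total agent weight."""
    scores = {}
    for label, w in zip(labels, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

def update_weights(weights, labels, truth, lr=0.1):
    """Very simple learning rule: reinforce agents that were right."""
    return [w + lr if lab == truth else max(w - lr, 0.0)
            for w, lab in zip(weights, labels)]

if __name__ == "__main__":
    agent_labels = ["car", "truck", "car", "pedestrian", "car"]
    weights = [1.0] * len(agent_labels)
    print("majority:", majority_vote(agent_labels))
    print("weighted:", weighted_vote(agent_labels, weights))
    weights = update_weights(weights, agent_labels, truth="car")
    print("updated weights:", weights)
```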