42 results for Individual-based modeling
Abstract:
Polycondensation of 2,6-dihydroxynaphthalene with 4,4'-bis(4"-fluorobenzoyl)biphenyl affords a novel, semicrystalline poly(ether ketone) with a melting point of 406 °C and a glass transition temperature (onset) of 168 °C. Molecular modeling and diffraction-simulation studies of this polymer, coupled with data from the single-crystal structure of an oligomer model, have enabled the crystal and molecular structure of the polymer to be determined from X-ray powder data. This structure, the first for any naphthalene-containing poly(ether ketone), is fully ordered, in monoclinic space group P2₁/b, with two chains per unit cell. Rietveld refinement against the experimental powder data gave a final agreement factor (R_wp) of 6.7%.
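For reference, the weighted-profile agreement factor R_wp quoted here has the standard Rietveld definition (textbook form, stated for convenience rather than taken from the abstract):

    \[
    R_{wp} = \left[ \frac{\sum_i w_i \left( y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}} \right)^2}{\sum_i w_i \left( y_i^{\mathrm{obs}} \right)^2} \right]^{1/2},
    \qquad w_i = 1 / y_i^{\mathrm{obs}},
    \]

where the sums run over the step-scan intensities y_i of the observed and calculated powder patterns.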
Abstract:
The work reported in this paper is motivated by the need to investigate general methods for pattern transformation. A formal definition of pattern transformation is provided, and four special cases are introduced, namely elementary and geometric transformations based on repositioning all or only some of the agents in the pattern. The need for a mathematical tool and for simulations to visualize the behavior of a transformation method is highlighted. A mathematical method based on the Moebius transformation is proposed; it involves discretizing events to plan the paths of individual robots in the pattern. Simulations on a particle physics simulator are used to validate the feasibility of the proposed method.
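As a rough illustration of the underlying map (a minimal sketch; the square pattern, coefficient values, and linear coefficient interpolation are assumptions for illustration, not the authors' path planner), a Moebius transformation f(z) = (az + b)/(cz + d) can be applied to agent positions encoded as complex numbers and discretized into waypoints:

    import numpy as np

    def moebius(z, a, b, c, d):
        # Moebius map f(z) = (a*z + b) / (c*z + d); requires a*d - b*c != 0.
        return (a * z + b) / (c * z + d)

    # Agent positions as complex numbers (a unit-square pattern, for illustration).
    agents = np.array([0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j])

    # Step the coefficients from the identity map (a = d = 1, b = c = 0) to
    # illustrative target values, yielding one waypoint set per step.
    a1, b1, c1, d1 = 1 + 0.5j, 0.2 + 0j, 0.1j, 1 + 0j
    steps = 10
    waypoints = []
    for k in range(1, steps + 1):
        t = k / steps
        # A real planner must also keep a*d - b*c away from zero along the path.
        waypoints.append(moebius(agents, 1 + t * (a1 - 1), t * b1, t * c1, 1 + t * (d1 - 1)))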
Abstract:
Given that next- and current-generation networks will coexist for a considerable period of time, it is important to improve the performance of existing networks. One improvement recently proposed is to enhance the throughput of ad hoc networks by using dual-hop relay-based transmission schemes. Since throughput in ad hoc networks is normally tied to their energy consumption, it is important to examine the impact of relay-based transmissions on energy consumption. In this paper, we present an analytical energy consumption model for dual-hop relay-based medium access control (MAC) protocols. Based on the recently reported relay-enabled Distributed Coordination Function (rDCF), we show the efficacy of the proposed analytical model. This is a generalized model and can be used to predict energy consumption in saturated relay-based ad hoc networks, both in an ideal environment and in the presence of transmission errors. It is shown that using a relay results not only in better throughput but also in better energy efficiency.
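To make the relay throughput-energy trade-off concrete, here is a minimal sketch under a simple first-order radio model (the radio constants, path-loss exponent, and halfway relay placement are assumptions for illustration, not the paper's analytical rDCF model):

    def tx_energy(bits, distance, e_elec=50e-9, e_amp=100e-12, alpha=2):
        # Transmit cost: electronics plus a distance-dependent amplifier term.
        return bits * (e_elec + e_amp * distance ** alpha)

    def rx_energy(bits, e_elec=50e-9):
        # Receive cost: electronics only.
        return bits * e_elec

    bits, d = 8000, 100.0  # one frame, source-destination distance in metres

    direct = tx_energy(bits, d) + rx_energy(bits)
    # Dual-hop via a relay halfway: two shorter transmissions and two receptions.
    relayed = 2 * (tx_energy(bits, d / 2) + rx_energy(bits))
    print(f"direct: {direct:.2e} J, relayed: {relayed:.2e} J")

With a path-loss exponent above 1, two shorter hops can cost less energy than one long hop, consistent with the relay gains reported above.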
Abstract:
We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners that operate on discriminative image-intensity features, defined on small patches and fast to compute. A database designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, requires only a ribbon-like search region, and can handle complex texture structures without requiring a large number of observations. We demonstrate results in the context of both interactive 2D delineation and fast 3D tracking, and compare the method's performance with other existing methods for line-search boundary detection.
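A schematic of the scoring pipeline (a generic sketch: the feature functions, decision-stump form, and logistic mapping stand in for the trained classifier described above):

    import numpy as np

    def classifier_score(patch, weak_learners):
        # Weighted sum of decision-stump responses on fast patch features.
        return sum(w * (1.0 if f(patch) > thr else -1.0)
                   for f, thr, w in weak_learners)

    def transition_probability(score):
        # Map the boosted score to a probability of a texture transition.
        return 1.0 / (1.0 + np.exp(-score))

    def line_search(image, point, normal, offsets, extract_patch, weak_learners):
        # Probe candidate positions along the normal to the initial boundary
        # estimate and return the offset with the highest transition probability.
        probs = [transition_probability(
                     classifier_score(extract_patch(image, point + t * normal),
                                      weak_learners))
                 for t in offsets]
        return offsets[int(np.argmax(probs))]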
Abstract:
This paper describes the design and implementation of an agent-based network for the support of collaborative switching tasks within the control room environment of the National Grid Company plc. This work draws on several research disciplines, including operational analysis, human-computer interaction, finite-state modelling techniques, intelligent agents, and computer-supported co-operative work. Aspects of these procedures have been used in the analysis of collaborative tasks to produce distributed local models for all involved users. These models have been used as the basis for the production of local finite state automata, which have then been embedded within an agent network together with behavioural information extracted from the task and user analysis phase. The resulting support system is capable of task and communication management within the transmission despatch environment.
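In outline, each embedded automaton is an event-driven state machine of the following shape (the states and events here are hypothetical placeholders; the paper derives the real ones from its task and user analysis):

    class TaskAutomaton:
        # Minimal finite state automaton for a collaborative switching task.
        TRANSITIONS = {
            ("idle", "request_switch"): "awaiting_confirmation",
            ("awaiting_confirmation", "confirm"): "switching",
            ("awaiting_confirmation", "reject"): "idle",
            ("switching", "complete"): "idle",
        }

        def __init__(self):
            self.state = "idle"

        def on_event(self, event):
            # Unrecognized events leave the state unchanged.
            self.state = self.TRANSITIONS.get((self.state, event), self.state)
            return self.state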
Abstract:
A new numerical model of inhaled charged aerosols has been developed based on a modified Weibel model. Both the velocity profiles (slug and parabolic flows) and the particle distributions (uniform and parabolic distributions) have been considered. Inhaled particles are modeled as a dilute dispersed-phase flow in which the particle motion is controlled by fluid forces and external forces acting on the particles. This numerical study extends previous numerical studies by considering both space- and image-charge forces. Because of the complex computation of the interacting forces due to the space-charge effect, the particle-mesh (PM) method is selected to calculate these forces. In the PM technique, the charges of all particles are assigned to the space-charge field mesh to calculate the charge density. Poisson's equation for the electrostatic potential is then solved, and the electrostatic force acting on each individual particle is interpolated. It is assumed that humidity has no effect on the charged particles. The results show that several other significant factors affect the deposition, such as the volume of the particle cloud, the velocity profile, and the particle distribution. This study allows a better understanding of the electrostatic mechanisms of aerosol transport and deposition in human airways.
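The PM cycle described above (assign charges to a mesh, solve Poisson's equation, interpolate the field back to the particles) can be sketched as follows; this is a 2-D periodic toy with nearest-grid-point assignment, not the study's 3-D airway solver:

    import numpy as np

    def particle_mesh_forces(positions, charges, n=64, box=1.0):
        # Nearest-grid-point charge assignment on a periodic n x n mesh.
        h = box / n
        idx = (positions / h).astype(int) % n
        rho = np.zeros((n, n))
        np.add.at(rho, (idx[:, 0], idx[:, 1]), charges / h**2)

        # Solve Poisson's equation, laplacian(phi) = -rho/eps0, with FFTs.
        eps0 = 8.854e-12
        k = 2 * np.pi * np.fft.fftfreq(n, d=h)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                    # avoid division by zero at the mean mode
        phi_hat = np.fft.fft2(rho) / (eps0 * k2)
        phi_hat[0, 0] = 0.0
        phi = np.real(np.fft.ifft2(phi_hat))

        # E = -grad(phi); sample the field at each particle's cell.
        ex, ey = np.gradient(-phi, h)
        e_at_p = np.stack([ex[idx[:, 0], idx[:, 1]], ey[idx[:, 0], idx[:, 1]]], axis=1)
        return charges[:, None] * e_at_p  # space-charge force on each particle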
Abstract:
The requirement to forecast volcanic ash concentrations was amplified in response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, i.e., the emissions of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between the simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
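The inversion step can be sketched in simplified form: treat the dispersion model as a linear operator M from emissions (per release time and height) to the satellite-observed ash columns, and solve a regularized least-squares problem. This is a minimal Tikhonov-style sketch with uncorrelated uncertainties, not the full operational algorithm:

    import numpy as np

    def invert_source_term(M, y, x_a, sigma_y, sigma_x):
        # Find emissions x minimizing
        #   ||(M @ x - y) / sigma_y||^2 + ||(x - x_a) / sigma_x||^2,
        # where y are observed ash columns and x_a is an a priori source term.
        A = M / sigma_y[:, None]
        b = (y - M @ x_a) / sigma_y
        lhs = A.T @ A + np.diag(1.0 / sigma_x**2)
        return x_a + np.linalg.solve(lhs, A.T @ b)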
Abstract:
In order to improve the quality of healthcare services, an integrated large-scale medical information system is needed that can adapt to the changing medical environment. In this paper, we propose a requirement-driven, hierarchically layered architecture for healthcare information systems. The system operates through a mapping mechanism between these layers and can thus organize its functions dynamically, adapting to users' requirements. Furthermore, we introduce organizational semiotics methods to capture and analyze user requirements through ontology charts and norms. Based on these results, the structure of the user's requirement pattern (URP) is established as the driving factor of our system. Our research contributes to the design of healthcare system architectures that can adapt to the changing medical environment.
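As a toy illustration of the layer-mapping idea (all names below are hypothetical; the paper derives its requirement patterns from ontology charts and norms), a URP can select which functional components the system composes:

    # Hypothetical mapping from requirement patterns to functional components.
    URP_TO_FUNCTIONS = {
        "outpatient_registration": ["verify_identity", "create_visit_record"],
        "prescription_review": ["fetch_history", "check_interactions"],
    }

    def organize_functions(urp):
        # The mapping mechanism between layers: a user's requirement pattern
        # dynamically determines which functions are composed.
        return URP_TO_FUNCTIONS.get(urp, [])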
Abstract:
The retrieval (estimation) of sea surface temperatures (SSTs) from space-based infrared observations is increasingly performed using retrieval coefficients derived from radiative transfer simulations of top-of-atmosphere brightness temperatures (BTs). Typically, an estimate of SST is formed from a weighted combination of BTs at a few wavelengths, plus an offset. This paper addresses two questions about the radiative transfer modeling approach to deriving these weighting and offset coefficients. How precisely specified do the coefficients need to be in order to obtain the required SST accuracy (e.g., scatter <0.3 K in week-average SST, bias <0.1 K)? And how precisely is it actually possible to specify them using current forward models? The conclusions are that weighting coefficients can be obtained with adequate precision, while the offset coefficient will often require an empirical adjustment of the order of a few tenths of a kelvin against validation data. Thus, a rational approach to defining retrieval coefficients is one of radiative transfer modeling followed by offset adjustment. The need for this approach is illustrated from experience in defining SST retrieval schemes for operational meteorological satellites. A strategy is described for obtaining the required offset adjustment, and the paper highlights some of the subtler aspects involved with reference to the example of SST retrievals from the imager on the geostationary satellite GOES-8.
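The retrieval form discussed here, a weighted combination of brightness temperatures plus an offset, and the recommended empirical offset adjustment can be sketched as follows (the coefficient values are illustrative, and the mean-bias adjustment is one simple choice of statistic):

    import numpy as np

    def retrieve_sst(bt, weights, offset):
        # SST estimate from channel brightness temperatures (K), e.g. a
        # split-window form SST = a0 + a1*T11 + a2*T12.
        return offset + bt @ weights

    def adjust_offset(offset, sst_retrieved, sst_validation):
        # Empirical adjustment against validation data (e.g. drifting buoys):
        # shift the offset coefficient by the mean retrieval bias.
        return offset - np.mean(sst_retrieved - sst_validation)

    bts = np.array([288.2, 286.9])       # 11 and 12 micron channels (K), illustrative
    a, a0 = np.array([3.1, -2.1]), 1.5   # illustrative coefficients
    print(retrieve_sst(bts, a, a0))      # ~292.4 K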
Abstract:
This paper introduces and evaluates DryMOD, a dynamic water balance model of the key hydrological processes in drylands, based on free, public-domain datasets. The rainfall model of DryMOD makes optimal use of spatially disaggregated Tropical Rainfall Measuring Mission (TRMM) datasets to simulate hourly rainfall intensities at a spatial resolution of 1 km. Regional-scale applications of the model in seasonal catchments in Tunisia and Senegal characterize runoff and soil moisture distribution and dynamics in response to varying rainfall data inputs and soil properties. The results highlight the need for hourly-based rainfall simulation and for correcting TRMM 3B42 rainfall intensities for the fractional cover of rainfall (FCR). Without FCR correction and disaggregation to 1 km, TRMM 3B42-based rainfall intensities are too low to generate surface runoff and to induce substantial changes in soil moisture storage. The outcomes of the sensitivity analysis show that topsoil porosity is the most important soil property for the simulation of runoff and soil moisture. We thus demonstrate the benefit of hydrological investigations at a scale for which reliable information on soil profile characteristics exists and which is sufficiently fine to account for their heterogeneity. Where such information is available, application of DryMOD can assist in the spatial and temporal planning of water harvesting according to runoff-generating areas and the runoff ratio, as well as in the optimization of agricultural activities based on a realistic representation of soil moisture conditions.
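The FCR correction mentioned above can be illustrated in its simplest form (an assumed form for illustration, not necessarily DryMOD's exact scheme): if rain covers only a fraction of a grid cell, the intensity inside the rain-covered area exceeds the reported cell mean by the inverse of that fraction:

    def correct_for_fractional_cover(cell_mean_intensity, fcr):
        # TRMM 3B42 reports a grid-cell mean rain rate; with fractional cover
        # fcr (0 < fcr <= 1), the within-rain intensity is mean / fcr.
        if fcr <= 0:
            return 0.0
        return cell_mean_intensity / fcr

    # Example: a 1.2 mm/h cell mean with 30% rain cover implies 4 mm/h locally,
    # enough to exceed infiltration capacity where the cell mean would not.
    print(correct_for_fractional_cover(1.2, 0.3))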
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems with covariates of more complex form, such as multi-dimensional arrays (i.e., tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows simultaneous projections of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are collected under the same or very similar conditions, so the data share some common latent components while each regression task can also have its own independent parameters. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on the Tucker decomposition, which simultaneously identifies both the common components of the parameters across all regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
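In two-mode (matrix) form, the idea can be sketched as follows (shapes, ranks, and the random data are illustrative; the paper's model additionally links factors across tasks and applies a sparsity-preserving regulariser):

    import numpy as np

    def predict(X, core, U1, U2):
        # Coefficient matrix in Tucker form: B = U1 @ G @ U2.T, where the core G
        # and parts of U1, U2 can be shared across regression tasks while the
        # remaining columns stay task-specific.
        B = U1 @ core @ U2.T
        # Tensor (here matrix) regression: the response is the inner product <X, B>.
        return np.sum(X * B)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((32, 32))    # one matrix-valued covariate
    G = rng.standard_normal((3, 3))      # rank-(3, 3) Tucker core
    U1 = rng.standard_normal((32, 3))
    U2 = rng.standard_normal((32, 3))
    print(predict(X, G, U1, U2))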