771 results for Agent-Based Models
Abstract:
Introduction Risk factor analyses for nosocomial infections (NIs) are complex. First, due to competing events for NI, the association between risk factors and NI as measured using hazard rates may not coincide with the association measured using cumulative probability (risk). Second, patients from the same intensive care unit (ICU), who share the same environmental exposure, are likely to be more similar with regard to risk factors predisposing to NI than patients from different ICUs. We aimed to develop an analytical approach that accounts for both features and to use it to evaluate associations between patient- and ICU-level characteristics and both the rates of NI and competing risks and the cumulative probability of infection. Methods We considered a multicenter database of 159 intensive care units containing 109,216 admissions (813,739 admission-days) from the Spanish HELICS-ENVIN ICU network. We analyzed the data using two models: an etiologic (rate-based) model and a predictive (risk-based) model. In both models, random effects (shared frailties) were introduced to assess heterogeneity. Death and discharge without NI were treated as competing events for NI. Results There was large heterogeneity across ICUs in NI hazard rates, which remained after accounting for multilevel risk factors, indicating that unobserved ICU-specific factors influence NI occurrence. Heterogeneity across ICUs in the cumulative probability of NI was even more pronounced. Several risk factors had markedly different associations in the rate-based and risk-based models. For some, the associations differed in magnitude: for example, high Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were associated with modest increases in the rate of nosocomial bacteremia but large increases in the risk. Others differed in sign: for example, respiratory vs. cardiovascular diagnostic categories were associated with a reduced rate of nosocomial bacteremia but an increased risk. Conclusions A combination of competing risks and multilevel models is required to understand direct and indirect risk factors for NI and to distinguish patient-level from ICU-level factors.
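The distinction between rate-based and risk-based associations can be illustrated with a small simulation. The sketch below is not the authors' model: it assumes exponential cause-specific hazards with hypothetical rate values and a gamma-distributed frailty, shared within each ICU, acting only on the NI hazard.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical baseline daily hazards (per admission-day): NI, discharge, death
BASE_NI, BASE_DISCHARGE, BASE_DEATH = 0.005, 0.08, 0.01
N_ICUS, PATIENTS_PER_ICU, FRAILTY_VAR = 50, 200, 0.5

# Gamma frailty with mean 1 and variance FRAILTY_VAR, shared within each ICU
frailty = rng.gamma(shape=1 / FRAILTY_VAR, scale=FRAILTY_VAR, size=N_ICUS)

ni_risk = []
for z in frailty:
    # Latent exponential times for the three competing events;
    # the frailty z scales only the NI hazard in this sketch.
    t_ni = rng.exponential(1 / (z * BASE_NI), PATIENTS_PER_ICU)
    t_dis = rng.exponential(1 / BASE_DISCHARGE, PATIENTS_PER_ICU)
    t_die = rng.exponential(1 / BASE_DEATH, PATIENTS_PER_ICU)
    # The observed outcome is whichever event occurs first (competing risks)
    first = np.argmin(np.stack([t_ni, t_dis, t_die]), axis=0)
    ni_risk.append(np.mean(first == 0))  # per-ICU cumulative probability of NI

ni_risk = np.array(ni_risk)
print(f"NI risk across ICUs: mean={ni_risk.mean():.3f}, "
      f"min={ni_risk.min():.3f}, max={ni_risk.max():.3f}")
```

Because the observed outcome is whichever of the three events occurs first, an ICU's cumulative NI risk depends on the discharge and death rates as well as on its NI hazard, which is why rate-based and risk-based associations can diverge.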
Abstract:
This paper presents a layered framework for integrating different Socio-Technical Systems (STS) models and perspectives into a whole-of-systems model. Holistic modelling plays a critical role in the engineering of STS because of the interplay between social and technical elements within these systems and the resulting emergent behaviour. The framework decomposes STS models into components, where each component is a static, dynamic, or behavioural object. Based on existing literature, a classification of the elements that make up STS, whether social, technical, or natural-environment elements, is developed; each object can in turn be classified according to the STS elements it represents. Using the proposed framework, models can be systematically decomposed to the point where interface points can be identified and the contextual factors required to transform a component of one model so that it interfaces with another are obtained. Using an airport inbound passenger facilitation process as a case-study socio-technical system, three different models are analysed: a Business Process Modelling Notation (BPMN) model, a Hybrid Queue-based Bayesian Network (HQBN) model, and an Agent-Based Model (ABM). The framework enables the modeller to identify non-trivial interface points, such as between the spatial interactions of an ABM and the causal reasoning of a HQBN, and between the process activity representation of a BPMN model and simulated behavioural performance in a HQBN. Such a framework is a necessary enabler for integrating different modelling approaches to understand and manage STS.
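A minimal sketch of the decomposition just described, assuming hypothetical Python class names (`Component` and `InterfacePoint` are illustrative, not the paper's terminology):

```python
from dataclasses import dataclass, field
from enum import Enum

class ObjectKind(Enum):          # component types from the framework
    STATIC = "static"
    DYNAMIC = "dynamic"
    BEHAVIOURAL = "behavioural"

class STSElement(Enum):          # classification of STS elements
    SOCIAL = "social"
    TECHNICAL = "technical"
    NATURAL = "natural environment"

@dataclass
class Component:
    name: str
    kind: ObjectKind
    elements: set                # which STS elements the object represents

@dataclass
class InterfacePoint:
    source: Component
    target: Component
    context: dict = field(default_factory=dict)  # contextual transformation factors

# Example: linking an ABM's spatial passenger agents to an HQBN queue node
passenger = Component("ABM passenger agent", ObjectKind.BEHAVIOURAL, {STSElement.SOCIAL})
queue = Component("HQBN security queue", ObjectKind.DYNAMIC, {STSElement.TECHNICAL})
link = InterfacePoint(passenger, queue, {"arrival_rate": "per-minute aggregation"})
```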
Abstract:
This paper presents simulation results for future electricity grids using an agent-based model developed with MODAM (MODular Agent-based Model). MODAM is introduced and its use demonstrated through four simulations based on a scenario that assumes a rise in on-site renewable generation and electric vehicle (EV) usage. The simulations were run over many years for two areas in Townsville, Australia, capturing spatial variability in technology uptake, and for two EV charging methods, capturing people's behaviours and their impact on the timing of the peak load. Impact analyses of these technologies were performed over the areas, down to the distribution transformer level, where greater variability in their contribution to asset peak load was observed. The MODAM models can be used for different purposes, such as assessing the impact of renewables on grid sizing or on greenhouse gas emissions. The insights gained from using MODAM for technology assessment are discussed.
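As an illustration of the kind of comparison described, the sketch below contrasts two EV charging behaviours at a single distribution transformer. It is not MODAM code; all parameter values (uptake share, charger rating, load shape) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
HOURS = np.arange(24)
N_HOUSEHOLDS = 100
EV_SHARE = 0.4          # hypothetical uptake level
EV_CHARGE_KW = 3.5      # hypothetical charger rating

# kW per household, with an evening peak around 18:00
base_load = 0.5 + 0.8 * np.exp(-((HOURS - 18) ** 2) / 8)

def transformer_load(charging: str) -> np.ndarray:
    """Aggregate hourly load at one distribution transformer (kW)."""
    load = base_load * N_HOUSEHOLDS
    n_evs = int(EV_SHARE * N_HOUSEHOLDS)
    for _ in range(n_evs):
        if charging == "on_arrival":      # plug in when arriving home (~18:00)
            start = int(np.clip(rng.normal(18, 1), 0, 21))
        else:                             # timer defers charging to off-peak (~23:00)
            start = int(np.clip(rng.normal(23, 0.5), 22, 23))
        load[start:start + 3] += EV_CHARGE_KW   # 3-hour charge window
        # (slices past hour 23 simply truncate in this simple sketch)
    return load

for method in ("on_arrival", "off_peak"):
    load = transformer_load(method)
    print(f"{method}: peak {load.max():.0f} kW at hour {load.argmax()}")
```

The on-arrival strategy stacks EV demand on top of the existing evening peak, while timer-based charging shifts it, moving the time of the transformer's peak load, which is the behavioural effect the abstract refers to.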
Abstract:
Process improvement and innovation are risky endeavors, like swimming in unknown waters. In this chapter, I will discuss how process innovation through BPM can benefit from Research-as-a-Service, that is, from the application of research concepts in the processes of BPM projects. A further subject will be how innovations can be converted from confidence-based to evidence-based models through the affordances of digital infrastructures such as large-scale enterprise software or social media. I will introduce the relevant concepts, provide illustrations of digital capabilities that allow for innovation, and share a number of key takeaway lessons for how organizations can innovate on the basis of digital opportunities and the principle of evidence-based BPM: grounding all process decisions in facts rather than fiction.
Abstract:
Bone metastasis is a complication that occurs in 80% of women with advanced breast cancer. Despite the prevalence of bone metastatic disease, its clinical management is still restricted to palliative treatment options. In fact, the underlying mechanisms of breast cancer osteotropism have not yet been fully elucidated, owing to a lack of suitable in vivo models able to recapitulate the human disease. In this work, we review current transplantation-based models for investigating breast cancer-induced bone metastasis and delineate the strengths and limitations of different grafting techniques, tissue sources, and hosts. We further show that humanized xenograft models incorporating human cells or tissue grafts at the primary tumor site or the metastatic site mimic the human disease more closely. Tissue-engineered constructs are emerging as a reproducible alternative for recapitulating functional humanized tissues in these murine models. The development of advanced humanized animal models may provide better platforms to investigate the mutual interactions between human cancer cells and their microenvironment and ultimately improve the translation of preclinical drug trials to the clinic.
Abstract:
In this article, we describe and compare two individual-based models constructed to investigate how genetic factors influence the development of phosphine resistance in the lesser grain borer (Rhyzopertha dominica). One model is based on the simplifying assumption that resistance is conferred by alleles at a single locus, while the other is based on the more realistic assumption that resistance is conferred by alleles at two separate loci. We simulated the population dynamics of R. dominica in the absence of phosphine fumigation and under high- and low-dose phosphine treatments, and found important differences between the predictions of the two models in all three cases. In the absence of fumigation, starting from the same initial genotype frequencies, the two models tended to different stable frequencies, although both reached Hardy-Weinberg equilibrium. The one-locus model overestimated the equilibrium proportion of strongly resistant beetles by a factor of 3.6 compared to the aggregated predictions of the two-locus model. Under a low-dose treatment, the one-locus model overestimated the proportion of strongly resistant individuals within the population and underestimated total population numbers compared to the two-locus model. These results show the importance of basing resistance evolution models on realistic genetics: using oversimplified one-locus models to develop pest control strategies runs the risk of not correctly identifying tactics to minimise the incidence of pest infestation.
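A deterministic caricature of the one-locus case helps make the selection mechanism concrete. This is not the authors' individual-based model: the genotype survival values under fumigation are hypothetical, and random mating is assumed each generation (Hardy-Weinberg proportions before selection).

```python
# Hypothetical per-generation survival under fumigation for each genotype
# at a single resistance locus (S = susceptible allele, R = resistance allele).
SURVIVAL = {"SS": 0.05, "SR": 0.20, "RR": 0.90}

def next_allele_freq(p_r: float, fumigated: bool) -> float:
    """One generation of random mating followed by optional phosphine selection."""
    q = 1.0 - p_r
    # Hardy-Weinberg genotype frequencies before selection
    f = {"RR": p_r**2, "SR": 2 * p_r * q, "SS": q**2}
    w = SURVIVAL if fumigated else {g: 1.0 for g in f}
    # Frequency of R after selection: RR carries two R alleles, SR carries one
    mean_w = sum(f[g] * w[g] for g in f)
    return (f["RR"] * w["RR"] + 0.5 * f["SR"] * w["SR"]) / mean_w

p = 0.01  # initial frequency of the resistance allele
for gen in range(10):
    p = next_allele_freq(p, fumigated=True)
print(f"R allele frequency after 10 treated generations: {p:.3f}")
```

Even this toy version shows how quickly a rare resistance allele can spread under sustained fumigation; the two-locus model differs because strong resistance requires the right allele combination at both loci.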
Abstract:
Multi- and intralake datasets of fossil midge assemblages in the surface sediments of small shallow lakes in Finland were studied to determine the most important environmental factors explaining trends in midge distribution and abundance. The aim was to develop palaeoenvironmental calibration models for the most important environmental variables for the purpose of reconstructing past environmental conditions. The developed models were applied to three high-resolution fossil midge stratigraphies from southern and eastern Finland to interpret environmental variability over the past 2000 years, with special focus on the Medieval Climate Anomaly (MCA), the Little Ice Age (LIA) and recent anthropogenic changes. The midge-based results were compared with the physical properties of the sediment, historical evidence and environmental reconstructions based on diatoms (Bacillariophyta), cladocerans (Crustacea: Cladocera) and tree rings. The results showed that the most important environmental factor controlling midge distribution and abundance along a latitudinal gradient in Finland was the mean July air temperature (TJul). However, when the dataset was environmentally screened to include only pristine lakes, water depth at the sampling site became more important, and when the dataset was geographically scaled to southern Finland, hypolimnetic oxygen conditions became the dominant environmental factor. The results from an intralake dataset from eastern Finland showed that the most important environmental factors controlling midge distribution within a lake basin were river contribution, water depth and submerged vegetation patterns. In addition, the intralake dataset showed that fossil midge assemblages represent fauna that lived in close proximity to the sampling sites, enabling the exploration of within-lake gradients in midge assemblages. Importantly, this within-lake heterogeneity may affect midge-based temperature estimation, because samples taken from the deepest point of a lake basin may infer considerably colder temperatures than expected, as shown by the present test results. It is therefore suggested that samples in fossil midge studies of shallow boreal lakes be taken from the sublittoral, where the assemblages are most representative of the whole-lake fauna. Transfer functions between midge assemblages and the environmental factors significantly related to the assemblages, including mean July air temperature, water depth, hypolimnetic oxygen, stream flow and distance to littoral vegetation, were developed using weighted averaging (WA) and weighted averaging-partial least squares (WA-PLS) techniques, which outperformed all other tested numerical approaches. Application of the models in downcore studies showed mostly consistent trends. Based on the present results, which agree with previous studies and historical evidence, the MCA, between ca. 800 and 1300 AD in eastern Finland, was characterized by warm temperature conditions and dry summers, but probably humid winters. The LIA prevailed in southern Finland from ca. 1550 to 1850 AD, with the coldest conditions occurring at ca. 1700 AD, whereas in eastern Finland cold conditions prevailed over a longer period, from ca. 1300 until 1900 AD. Recent climatic warming was clearly represented in all of the temperature reconstructions.
In terms of long-term climatology, the present results support the concept that the North Atlantic Oscillation (NAO) index correlates positively with winter precipitation and annual temperature and negatively with summer precipitation in eastern Finland. In general, the results indicate a relatively warm climate with dry summers but snowy winters during the MCA, and a cool climate with rainy summers and dry winters during the LIA. The present reconstructions, and forthcoming applications of the models, can be used in assessments of long-term environmental dynamics to refine the understanding of the past environmental reference conditions and natural variability that environmental scientists, ecologists and policy makers require when making decisions about ongoing global, regional and local changes. The midge-based models developed in this thesis for temperature, hypolimnetic oxygen, water depth, littoral vegetation shift and stream flow are open for scientific use on request.
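The weighted-averaging (WA) approach mentioned above is simple enough to sketch. The toy example below shows the two WA steps, estimating taxon optima from a training set and inferring past values from fossil assemblages, on synthetic data; it omits the deshrinking regression and the WA-PLS extension used in practice.

```python
import numpy as np

def wa_optima(abundance: np.ndarray, env: np.ndarray) -> np.ndarray:
    """Taxon optima: abundance-weighted mean of the environmental variable.
    abundance: (n_lakes, n_taxa) relative abundances; env: (n_lakes,), e.g. TJul."""
    return (abundance * env[:, None]).sum(axis=0) / abundance.sum(axis=0)

def wa_reconstruct(fossil: np.ndarray, optima: np.ndarray) -> np.ndarray:
    """Inferred value per fossil sample: abundance-weighted mean of taxon optima."""
    return (fossil * optima[None, :]).sum(axis=1) / fossil.sum(axis=1)

# Toy training set: 5 lakes, 3 midge taxa, known mean July air temperatures
rng = np.random.default_rng(1)
train = rng.random((5, 3))
t_jul = np.array([11.0, 12.5, 14.0, 15.5, 17.0])

optima = wa_optima(train, t_jul)
downcore = rng.random((4, 3))          # 4 fossil samples from a sediment core
print(wa_reconstruct(downcore, optima))
```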
Abstract:
Prickly acacia (Vachellia nilotica subsp. indica), a multipurpose tree native to India, is a Weed of National Significance and a target for biological control in Australia. Based on plant genetic and climatic similarities, native-range surveys to identify potential biological control agents for prickly acacia were conducted in India during 2008-2011. In these surveys, the leaf-feeding geometrid Isturgia disputaria Guenée (syn. Tephrina pulinda), widespread in the states of Tamil Nadu and Karnataka, was prioritized as a potential biological control agent based on its field host range, damage potential, and no-choice tests on non-target plant species. Although the field host-range study showed that V. nilotica subsp. indica and V. nilotica subsp. tomentosa were the primary hosts for successful development of I. disputaria, replicated no-choice larval feeding and development tests conducted on cut foliage and live plants of nine non-target acacia species in India revealed larval feeding and development on three of the nine non-target species, V. tortilis, V. planifrons and V. leucophloea, in addition to V. nilotica subsp. indica and V. nilotica subsp. tomentosa. However, the proportion of larvae developing into adults was higher on V. nilotica subsp. indica and V. nilotica subsp. tomentosa, with 90% and 80% of larvae completing development, respectively. In contrast, larval mortality was higher on V. tortilis (70%), V. leucophloea (90%) and V. planifrons (70%). The no-choice test results support earlier host-specificity test results for I. disputaria from Pakistan, Kenya and quarantine studies in Australia. The contrasting results between field host range and host use under no-choice conditions are discussed.
Abstract:
Frictions are factors that hinder the trading of securities in financial markets. Typical frictions include limited market depth, transaction costs, lack of infinite divisibility of securities, and taxes. Conventional models used in mathematical finance often gloss over these issues, which affect almost all financial markets, by arguing that the impact of frictions is negligible and that frictionless models are therefore valid approximations. This dissertation consists of three research papers related to the validity of such approximations in two distinct modeling problems. Models of price dynamics based on diffusion processes, i.e., continuous strong Markov processes, are widely used in the frictionless setting. The first paper establishes that diffusion models can indeed be understood as approximations of price dynamics in markets with frictions. This is achieved by introducing an agent-based model of a financial market in which finitely many agents trade a financial security whose price evolves according to the price impacts generated by trades. It is shown that if the number of agents is large, then under certain assumptions the price process of the security, which is a pure-jump process, can be approximated by a one-dimensional diffusion process. In a slightly extended model, in which agents may exhibit herd behavior, the approximating diffusion model turns out to be a stochastic volatility model. Finally, it is shown that when the agents' tendency to herd is strong, logarithmic returns in the approximating stochastic volatility model are heavy-tailed. The remaining papers concern no-arbitrage criteria and superhedging in continuous-time option pricing models under small-transaction-cost asymptotics. Guasoni, Rásonyi, and Schachermayer have shown that, in such a setting, a financial security admits no arbitrage opportunities and there exist no feasible superhedging strategies for European call and put options written on it, as long as its price process is continuous and has the so-called conditional full support (CFS) property. Motivated by this result, the two papers establish CFS for certain stochastic integrals and for a subclass of Brownian semistationary processes. As a consequence, a wide range of possibly non-Markovian local and stochastic volatility models have the CFS property.
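The diffusion approximation in the first paper can be caricatured as follows. The sketch ignores herding and any asymmetry in price impact: each of n agents trades at unit-rate Poisson times, and each trade moves the log-price by ±1/√n, so the pure-jump path has terminal variance of order one and approaches a Brownian motion as n grows.

```python
import numpy as np

rng = np.random.default_rng(7)

def jump_price_path(n_agents: int, horizon: float = 1.0) -> np.ndarray:
    """Pure-jump log-price driven by many small trades.

    Each agent trades at unit-rate Poisson times; every trade moves the
    log-price by +/- 1/sqrt(n_agents). As n_agents grows, the path
    approaches a Brownian motion (a one-dimensional diffusion).
    """
    n_trades = rng.poisson(n_agents * horizon)          # total trades in [0, horizon]
    jumps = rng.choice([-1.0, 1.0], size=n_trades) / np.sqrt(n_agents)
    return np.concatenate(([0.0], jumps)).cumsum()

for n in (10, 100, 10_000):
    path = jump_price_path(n)
    print(f"n_agents={n:6d}: {len(path) - 1} jumps, terminal log-price {path[-1]:+.3f}")
```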
Abstract:
In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. The sensitivity of the numerical predictions to grid refinement, to different convective differencing schemes, and to the choice of the (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as the model predictions proved to be significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced internal recirculation zone (IRZ) predicted by the models over an extended range of inlet conditions. Our main findings are: (i) the standard k-epsilon model performed best compared with experiment; (ii) no single inlet specification can simultaneously optimize the performance of all the models considered; (iii) the RNG models predict both single-cell and double-cell IRZ characteristics, the latter both with and without additional internal stagnation points. The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not yield an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models a priori for optimal performance in swirl flows is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.
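Both the standard and RNG k-epsilon models are two-equation eddy-viscosity closures built on the same relation; the RNG analysis chiefly rederives the model constants (and adds a strain-dependent term in the epsilon equation). The shared eddy-viscosity relation, with the commonly cited constant values, is:

```latex
\nu_t \;=\; C_\mu \, \frac{k^2}{\varepsilon},
\qquad
C_\mu^{\mathrm{std}} = 0.09,
\qquad
C_\mu^{\mathrm{RNG}} \approx 0.0845
```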
Abstract:
Multi-agent systems (MAS) advocate an agent-based approach to software engineering based on decomposing problems in terms of decentralized, autonomous agents that can engage in flexible, high-level interactions. This chapter introduces the Scalable fault-tolerant Agent Grooming Environment (SAGE), a second-generation, Foundation for Intelligent Physical Agents (FIPA)-compliant multi-agent system developed at NIIT-Comtec, which provides an environment for creating distributed, intelligent, and autonomous entities encapsulated as agents. The chapter focuses on the highlight of SAGE: its decentralized fault-tolerant architecture, which can be used to develop applications in areas such as e-health, e-government, and e-science. In addition, the SAGE architecture provides tools for runtime agent management, directory facilitation, monitoring, and editing of the message exchange between agents. SAGE also provides a built-in mechanism for programming agent behavior and capabilities through its autonomous agent architecture, the other major highlight of this chapter. The authors believe that the market for agent-based applications is growing rapidly and that SAGE can play a crucial role in the development of future intelligent applications.
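The abstract does not document SAGE's programming interface, so the sketch below shows only a generic FIPA-ACL-style message structure of the kind a FIPA-compliant platform exchanges between agents; the class and function names are hypothetical, not SAGE's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ACLMessage:
    """Minimal FIPA-ACL-style message (a subset of the FIPA parameters)."""
    performative: str            # e.g. "request", "inform", "agree"
    sender: str
    receiver: str
    content: str
    ontology: Optional[str] = None
    conversation_id: Optional[str] = None

inbox: list = []

def send(msg: ACLMessage) -> None:
    inbox.append(msg)  # stand-in for a platform's message-transport service

send(ACLMessage("request", "monitor@ehealth", "records@ehealth",
                "(fetch patient-vitals)"))
print(inbox[0].performative, "->", inbox[0].receiver)
```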
Abstract:
A health-monitoring and life-estimation strategy for composite rotor blades is developed in this work. The cross-sectional stiffness reduction obtained from physics-based models is expressed as a function of the life of the structure using a recent phenomenological damage model. This stiffness reduction is then used to study the behavior of measurable system parameters such as blade deflections, loads, and strains of a composite rotor blade in static analysis and in forward flight. The simulated measurements are obtained from an aeroelastic analysis of the composite rotor blade, based on finite elements in space and time, with physics-based damage modes that are then linked to the life consumption of the blade. The model-based measurements are contaminated with noise to simulate real data. Genetic fuzzy systems are developed for global online prediction of physical damage and life consumption using displacement- and force-based measurement deviations between damaged and undamaged conditions. Furthermore, local online prediction of physical damage and life consumption is performed using strains measured along the blade length. It is observed that the life consumed in the matrix-cracking zone is about 12-15% and the life consumed in the debonding/delamination zone is about 45-55% of the total life of the blade. It is also observed that the success rate of the genetic fuzzy systems depends on the number and type of measurements, the training, and the testing noise level. The genetic fuzzy systems work well with noisy data and are recommended for online structural health monitoring of composite helicopter rotor blades.
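To make the fuzzy half of a genetic fuzzy system concrete, the sketch below maps a normalized measurement deviation to a damage level using two hand-written rules; in the systems described above, the membership-function parameters and rules would be tuned by a genetic algorithm, which is omitted here. All shapes and values are hypothetical.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_damage(deflection_dev: float) -> float:
    """Map a normalized measurement deviation (0..1) to a damage level (0..1)
    with two rules: LOW deviation -> LOW damage, HIGH deviation -> HIGH damage.
    Defuzzification is a weighted average of the rule consequents."""
    mu_low = tri(deflection_dev, -0.6, 0.0, 0.6)
    mu_high = tri(deflection_dev, 0.4, 1.0, 1.6)
    consequents = {0.1: mu_low, 0.9: mu_high}     # damage level per rule
    total = sum(consequents.values())
    return sum(level * mu for level, mu in consequents.items()) / total if total else 0.0

for dev in (0.1, 0.5, 0.9):
    print(f"deviation {dev:.1f} -> damage {infer_damage(dev):.2f}")
```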
Abstract:
This paper deals with the quasi-static and dynamic mechanical analysis of montmorillonite-filled polypropylene composites. Nanocomposites were prepared by blending montmorillonite (nanoclay), varying from 3 to 9% by weight, with polypropylene. The dynamic mechanical properties, such as the storage modulus, loss modulus, and mechanical loss factor, of PP and the nanocomposites were investigated over a range of temperatures and frequencies. The results showed better mechanical and thermomechanical properties at higher nanoclay concentrations. Regression models based on design of experiments (DOE) were developed to predict the storage modulus, and their predictions were compared with those of theoretical models.
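A DOE-style regression of this kind can be sketched in a few lines. The factor ranges below mirror the abstract (3-9 wt% clay, varying temperature and frequency), but the response values are synthetic and the model form (main effects plus one interaction) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical DOE factors: clay content (wt%), temperature (C), frequency (Hz)
clay = np.array([3, 3, 6, 6, 9, 9, 3, 9])
temp = np.array([30, 90, 30, 90, 30, 90, 60, 60])
freq = np.array([1, 10, 10, 1, 1, 10, 5, 5])

# Synthetic storage-modulus response (MPa) standing in for measured data
E_storage = 1500 + 40 * clay - 8 * temp + 12 * freq + rng.normal(0, 20, clay.size)

# Linear model with one interaction: E = b0 + b1*clay + b2*temp + b3*freq + b4*clay*temp
X = np.column_stack([np.ones_like(clay), clay, temp, freq, clay * temp])
coef, *_ = np.linalg.lstsq(X, E_storage, rcond=None)
print("fitted coefficients:", np.round(coef, 2))
```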
Abstract:
The smart grid is a highly complex system that is evolving from the traditional power grid through the addition of new and sophisticated communication and control devices. This will enable the integration of new elements for distributed power generation and increasingly automated operation, both for utilities and for customers. To model such systems, a bottom-up method is followed, using only a few basic elements structured into two layers: a physical layer for electrical power transmission and a logical layer for communication between elements. A simple case study is presented to analyse the possibilities of simulation: a microgrid model with dynamic load management and an integrated approach that can process both electrical and communication flows.
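A minimal sketch of the two-layer idea: each node participates in a physical layer (power demand or supply) and a logical layer (messages), and a naive dynamic-load-management routine uses the logical layer when the physical layer is overloaded. All names and numbers are illustrative, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A grid element present in both layers: it draws or supplies power
    (physical layer) and exchanges messages (logical layer)."""
    name: str
    demand_kw: float = 0.0                       # physical layer (negative = generation)
    inbox: list = field(default_factory=list)    # logical layer

def dispatch(nodes: list, capacity_kw: float) -> None:
    """Naive dynamic load management: if total demand exceeds capacity,
    ask consuming nodes to shed load via the communication layer."""
    total = sum(n.demand_kw for n in nodes)
    if total > capacity_kw:
        for n in nodes:
            if n.demand_kw > 0:
                n.inbox.append(("shed", round(n.demand_kw * (1 - capacity_kw / total), 2)))

homes = [Node("house_a", 4.0), Node("house_b", 6.0), Node("pv_array", -3.0)]
dispatch(homes, capacity_kw=5.0)
print([(n.name, n.inbox) for n in homes])
```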
Abstract:
EFTA 2009