939 results for Individual-based modeling
Abstract:
Cpfg is a program for simulating and visualizing plant development, based on the theory of L-systems. A special-purpose programming language, used to specify plant models, is an essential feature of cpfg. We review postulates of L-system theory that have influenced the design of this language. We then present the main constructs of this language, and evaluate it from a user's perspective.
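The parallel-rewriting core of L-system theory that this abstract refers to can be sketched in a few lines. This is a minimal illustration of the formalism, not cpfg's own modeling language:

```python
# Minimal parallel-rewriting sketch of an L-system (the formalism cpfg
# builds on, not cpfg's special-purpose language).
def rewrite(axiom, rules, steps):
    """Apply the production rules to every symbol in parallel, `steps` times.
    Symbols without a rule are copied unchanged (identity production)."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's classic algae model: A -> AB, B -> A.
rules = {"A": "AB", "B": "A"}
print(rewrite("A", rules, 4))  # -> ABAABABA
```

The key property is that all symbols are rewritten simultaneously in each step, which is what distinguishes L-systems from sequential grammars.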
Abstract:
On the basis of a spatially distributed sediment budget across a large basin, the costs of achieving given sediment-reduction targets in rivers were estimated. A range of investment prioritization scenarios was tested to identify the most cost-effective strategy for controlling suspended sediment loads. The scenarios were based on successively introducing more information from the sediment budget. The effect of the spatial heterogeneity of contributing sediment sources on the cost-effectiveness of prioritization was investigated. Cost-effectiveness was shown to increase with the sequential introduction of sediment budget terms. The least-cost solution was achieved by including spatial information linking sediment sources to the downstream target location. This solution produced cost curves similar to those derived using a genetic algorithm formulation. Appropriate investment prioritization can offer large cost savings because the magnitude of the costs can vary severalfold depending on which type of erosion source or sediment delivery mechanism is targeted. Target settings that consider only erosion source rates can result in spending more money than random management intervention to achieve downstream targets. Coherent spatial patterns of contributing sediment emerge from the budget model and its many inputs. The heterogeneity in these patterns can be summarized in succinct form. This summary was shown to be consistent with the cost difference between local and regional prioritization for three of the four test catchments. To explain the effect for the fourth catchment, the detail of the individual sediment sources needed to be taken into account.
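The benefit of including spatial delivery information can be illustrated by ranking interventions on cost per tonne of sediment actually delivered to the downstream target. The data, field names, and greedy strategy below are illustrative assumptions, not the paper's sediment budget:

```python
# Greedy prioritization sketch: rank interventions by cost per tonne of
# sediment delivered to the downstream target (source reduction weighted
# by a delivery ratio). Data and field names are illustrative.
def prioritize(sources, target_reduction):
    """sources: dicts with 'name', 'cost', 'reduction' (t/yr at source),
    and 'delivery' (fraction reaching the downstream target).
    Returns (chosen intervention names, total cost)."""
    ranked = sorted(
        sources, key=lambda s: s["cost"] / (s["reduction"] * s["delivery"]))
    chosen, cost, achieved = [], 0.0, 0.0
    for s in ranked:
        if achieved >= target_reduction:
            break
        chosen.append(s["name"])
        cost += s["cost"]
        achieved += s["reduction"] * s["delivery"]
    return chosen, cost

sources = [
    {"name": "gully A", "cost": 100.0, "reduction": 50.0, "delivery": 0.9},
    {"name": "hillslope B", "cost": 80.0, "reduction": 60.0, "delivery": 0.2},
    {"name": "bank C", "cost": 120.0, "reduction": 40.0, "delivery": 1.0},
]
print(prioritize(sources, 60.0))  # -> (['gully A', 'bank C'], 220.0)
```

Note that "hillslope B", though cheapest and largest at source, ranks last once its low delivery ratio is accounted for, which mirrors the abstract's point that targeting erosion source rates alone can waste money.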
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates the advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble the optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against simulated-annealing-based genetic algorithms. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
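One way such a hybrid can be arranged is to generate offspring with GA operators and accept them through a simulated-annealing (Metropolis) test under a cooling temperature. This is a generic sketch of that idea; the paper's exact operators and cooling schedule may differ:

```python
import math
import random

def sa_ga_minimize(f, dim, pop_size=30, gens=400, t0=1.0, seed=0):
    """Hybrid GA/SA sketch: one-point crossover and Gaussian mutation
    generate offspring, which replace their parents through a
    simulated-annealing (Metropolis) acceptance test."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for g in range(gens):
        temp = t0 / (1 + g)  # cooling schedule
        next_pop = []
        for parent in pop:
            mate = rng.choice(pop)
            cut = rng.randrange(dim)
            child = parent[:cut] + mate[cut:]       # one-point crossover
            i = rng.randrange(dim)
            child[i] += rng.gauss(0, 0.1)           # Gaussian mutation
            d = f(child) - f(parent)
            # Metropolis rule: keep improvements, occasionally accept worse
            if d < 0 or rng.random() < math.exp(-d / temp):
                next_pop.append(child)
            else:
                next_pop.append(parent)
        pop = next_pop
    return min(pop, key=f)

# Fit a trivial 2-parameter "model": minimize the sum of squares.
best = sa_ga_minimize(lambda x: sum(v * v for v in x), dim=2)
```

Early on, the high temperature lets the population escape local minima; as it cools, the acceptance test degenerates into hill climbing and the population converges.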
Abstract:
We consider two viral strains competing against each other within individual hosts (at the cellular level) and at the population level (for infecting hosts) by studying two cases. In the first case, the strains do not mutate into each other. In this case, we found that each individual in the population can be infected by only one strain and that coexistence in the population is possible only when the strain that has the greater basic intracellular reproduction number, R0c, has the smaller basic population reproduction number, R0p. Treatment against one strain shifts the population equilibrium toward the other strain in a complicated way (see Appendix B). In the second case, we assume that the strain with the greater intracellular reproduction number R0c can mutate into the other strain. In this case, individual hosts can be simultaneously infected by both strains (coexistence within the host). Treatment shifts the prevalence of the two strains within the hosts, depending on the mortality induced by the treatment, which is, in turn, dependent upon the doses given to each individual. The relative proportions of the strains at the population level, under treatment, depend both on the relative proportions within the hosts (which are determined by the dosage of treatment) and on the number of individuals treated per unit time, that is, the rate of treatment. Implications for cases of real diseases are briefly discussed.
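The within-host competitive exclusion described in the first case can be illustrated with a generic target-cell-limited model, in which the strain with the larger intracellular reproduction number drives the other out. The equations and all parameters below are illustrative, not the paper's model:

```python
# Illustrative within-host competition between two viral strains for a
# shared pool of target cells (a generic target-cell-limited sketch,
# not the paper's model; all parameters are made up).
def simulate(beta1=2.0, beta2=1.5, delta=1.0, lam=1.0, d=0.1,
             dt=0.01, steps=50_000):
    T, V1, V2 = 1.0, 0.01, 0.01   # target cells, loads of strains 1 and 2
    for _ in range(steps):
        dT = lam - d * T - (beta1 * V1 + beta2 * V2) * T   # cell supply/loss
        dV1 = beta1 * V1 * T - delta * V1                  # strain 1 growth
        dV2 = beta2 * V2 * T - delta * V2                  # strain 2 growth
        T, V1, V2 = T + dt * dT, V1 + dt * dV1, V2 + dt * dV2
    return T, V1, V2

# The strain with the larger infectivity (strain 1 here) depresses the
# target-cell level below what strain 2 needs to sustain itself, so
# strain 2 goes extinct within the host: competitive exclusion.
final_T, final_V1, final_V2 = simulate()
```

At equilibrium the target-cell level settles at delta/beta1 (0.5 here), which is below the threshold delta/beta2 that strain 2 would need, so without mutation only one strain persists per host, as in the paper's first case.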
Abstract:
A thermodynamic approach is developed in this paper to describe the behavior of a subcritical fluid in the neighborhood of the vapor-liquid interface and close to a graphite surface. The fluid is modeled as a system of parallel molecular layers. The Helmholtz free energy of the fluid is expressed as the sum of the intrinsic Helmholtz free energies of the separate layers and the potential energy of their mutual interactions, calculated with the 10-4 potential. This Helmholtz free energy is described by an equation of state (such as the Bender or Peng-Robinson equation), which provides a convenient means of obtaining the intrinsic Helmholtz free energy of each molecular layer as a function of its two-dimensional density. All molecular layers of the bulk fluid are in mechanical equilibrium, corresponding to the minimum of the total potential energy. In the case of adsorption, the external potential exerted by the graphite layers is added to the free energy. The state of the interface zone between the liquid and vapor phases, or the state of the adsorbed phase, is determined by the minimum of the grand potential. In the case of phase equilibrium, the approach leads to the distribution of density and pressure over the transition zone. The interrelation between the collision diameter and the potential well depth was determined from the surface tension. It was shown that the distance between neighboring molecular layers changes substantially in the vapor-liquid transition zone and in the adsorbed phase with loading. The approach is considered in this paper for the case of adsorption of argon and nitrogen on carbon black. In both cases excellent agreement with the experimental data was achieved without additional assumptions or fitting parameters, except for the fluid-solid potential well depth.
The approach has far-reaching consequences and can be readily extended to the model of adsorption in slit pores of carbonaceous materials and to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
Abstract:
A thermodynamic approach based on the Bender equation of state is suggested for the analysis of supercritical gas adsorption on activated carbons at high pressure. The approach accounts for the equality of the chemical potential in the adsorbed phase and that in the corresponding bulk phase, and for the distribution of elements of the adsorption volume (EAV) over the potential energy of gas-solid interaction. This scheme is extended to subcritical fluid adsorption and takes into account the phase transition in EAV. The method is adapted to gravimetric measurements of mass excess adsorption and has been applied to the adsorption of argon, nitrogen, methane, ethane, carbon dioxide, and helium on activated carbon Norit R I in the temperature range from 25 to 70 °C. The distribution function of adsorption volume elements over potentials exhibits overlapping peaks and is consistently reproduced for different gases. It was found that the distribution function changes weakly with temperature, which was confirmed by comparison with the distribution function obtained by the same method using the nitrogen adsorption isotherm at 77 K. It was shown that parameters such as pore volume and skeleton density can be determined directly from adsorption measurements, while the conventional approach of helium expansion at room temperature can lead to erroneous results due to the adsorption of helium in small pores of activated carbon. The approach is a convenient tool for the analysis and correlation of excess adsorption isotherms over a wide range of pressure and temperature. This approach can be readily extended to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and conditional moment closure methods and is based on a generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
Polymers have become the reference material for high-reliability and high-performance applications. In this work, a multi-scale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of phenomena occurring at smaller scales, a coupling of the Finite Element Method (FEM) and Molecular Dynamics (MD) modeling in an iterative procedure was employed, enabling the prediction of the macroscopic constitutive response. As the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, the previously described multi-scale method computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multi-scale approach can enable the prediction of properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, thus offering increased potential accuracy compared to traditional methods.
Abstract:
The aim of this paper is to establish some basic guidelines to help draft the information letter sent to individual contributors should it be decided to use this model in the Spanish public pension system. With this end in mind and basing our work on the experiences of the most advanced countries in the field and the pioneering papers by Jackson (2005), Larsson et al. (2008) and Sunden (2009), we look into the concept of “individual pension information” and identify its most relevant characteristics. We then give a detailed description of two models, those in the United States and Sweden, and in particular look at how they are structured, what aspects could be improved and what their limitations are. Finally we make some recommendations of special interest for designing the model for Spain.
Abstract:
This paper presents a multi-agent simulation model whose goal is to help one specific participant in a multi-criteria group decision-making process. The model has five main types of actors: the human participant, who uses the simulation and argumentation support system; the participant agents, one associated with the human participant and the others simulating the other human members of the decision meeting group; the directory agent; the proposal agents, representing the different alternatives for a decision (the alternatives are evaluated based on criteria); and the voting agent, responsible for all voting mechanisms. At this stage, a two-phase algorithm is proposed. In the first phase, each participant agent makes its own evaluation of the proposals under discussion, and the voting agent conducts a simulated voting process. In the second phase, after the dissemination of the voting results, each participant agent argues to convince the others to choose one of the possible alternatives. The arguments used to convince a specific participant depend on the agent's knowledge about that participant. This two-phase algorithm is applied iteratively.
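The iterated vote-then-argue loop can be sketched as follows. The agents' knowledge and argumentation are stubbed as a simple persuasion increment, and all names and scores are illustrative rather than taken from the paper:

```python
# Sketch of a two-phase vote/argue loop: agents vote for their
# best-scored proposal; if there is no consensus, supporters of the
# leading proposal "argue" for it, modeled here as a crude score
# increment (a stand-in for knowledge-based argumentation).
def decide(evaluations, persuasion, rounds=10):
    """evaluations: {agent: {proposal: score}}.
    persuasion: how strongly arguments for the leader sway each agent."""
    winner = None
    for _ in range(rounds):
        # Phase 1: each participant agent votes for its best proposal.
        votes = {}
        for agent, scores in evaluations.items():
            choice = max(scores, key=scores.get)
            votes[choice] = votes.get(choice, 0) + 1
        winner, count = max(votes.items(), key=lambda kv: kv[1])
        if count == len(evaluations):   # consensus reached
            return winner
        # Phase 2: argumentation nudges every agent's score for the leader.
        for scores in evaluations.values():
            scores[winner] += persuasion
    return winner

evals = {"a1": {"P1": 0.9, "P2": 0.2},
         "a2": {"P1": 0.3, "P2": 0.8},
         "a3": {"P1": 0.6, "P2": 0.5}}
print(decide(evals, persuasion=0.2))  # P1 leads 2-1 and gains support
```

Here agent a2 initially prefers P2, but after a few argumentation rounds its score for the majority choice P1 overtakes P2 and the group converges.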
Abstract:
The growing heterogeneity of networks, devices, and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm allow the design of novel scalable coding solutions with improved error robustness and low encoding complexity while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimate of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both the encoder and decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
Abstract:
Biometric recognition is emerging as an alternative solution for applications where the privacy of information is crucial. This paper presents an embedded biometric recognition system based on electrocardiographic (ECG) signals for individual identification and authentication. The proposed system implements a real-time state-of-the-art recognition algorithm, which extracts information from the frequency domain. The system is based on an ARM Cortex 4. Preliminary results show that embedded platforms are a promising path for the implementation of ECG-based applications in real-world scenarios.
Abstract:
The future of health care delivery is becoming more citizen-centred, as today's user is more active, better informed, and more demanding. The European Commission is promoting online health services and, therefore, member states will need to boost the deployment and use of such services. This makes e-health adoption an important field to be studied and understood. This study applied the extended unified theory of acceptance and use of technology (UTAUT2) to explain patients' individual adoption of e-health. An online questionnaire was administered in Portugal, using mostly the same instrument as UTAUT2, adapted to the e-health context. We collected 386 valid answers. Performance expectancy, effort expectancy, social influence, and habit had the most significant explanatory power over behavioural intention, and habit and behavioural intention over technology use. The model explained 52% of the variance in behavioural intention and 32% of the variance in technology use. Our research helps to understand the technology characteristics desired of e-health. By testing an information technology acceptance model, we are able to determine what patients value most when deciding whether or not to adopt e-health systems.
Abstract:
This study aims to replicate Apple's stock market movement by modeling major investment profiles and investors. The model recreates a live exchange to forecast any predictability in stock price variation, given knowledge of how investors act when making investment decisions. This methodology is particularly relevant if, just by observing historical prices and knowing the tendencies in other players' behavior, risk-adjusted profits can be made. Empirical academic research shows that abnormal returns are hardly consistent without a clear idea of who is in the market at a given moment and the corresponding market shares. Therefore, even when investors' individual investment profiles are known, it is not clear how they affect aggregate markets.
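A market of the kind described, with stylized investor profiles whose aggregate demand moves the price, might be sketched like this. The profile rules and every parameter are illustrative assumptions, not the study's calibrated model:

```python
import random

# Toy agent-based market: stylized investor profiles trade one asset
# whose price moves with net demand. All rules and parameters are
# illustrative assumptions, not the study's model.
def simulate_market(n_fund=50, n_trend=50, steps=200, seed=1):
    rng = random.Random(seed)
    price, fundamental = 100.0, 100.0
    history = [price]
    for _ in range(steps):
        # Fundamentalists buy below the fundamental value, sell above it.
        demand = n_fund * 0.01 * (fundamental - price)
        # Trend followers extrapolate the most recent price move.
        if len(history) >= 2:
            demand += n_trend * 0.05 * (history[-1] - history[-2])
        demand += rng.gauss(0, 0.5)   # noise traders
        price += 0.1 * demand          # linear price impact of net demand
        history.append(price)
    return history

prices = simulate_market()
```

Varying the mix of profiles (e.g. more trend followers than fundamentalists) changes how strongly past moves are amplified, which is the kind of market-share effect the abstract says matters for predictability.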