969 results for Simulation experiments
Abstract:
The identification of signatures of natural selection in genomic surveys has become an area of intense research, stimulated by the increasing ease with which genetic markers can be typed. Loci identified as subject to selection may be functionally important, and hence (weak) candidates for involvement in disease causation. They can also be useful in determining the adaptive differentiation of populations, and exploring hypotheses about speciation. Adaptive differentiation has traditionally been identified from differences in allele frequencies among different populations, summarised by an estimate of F_ST. Low outliers relative to an appropriate neutral population-genetics model indicate loci subject to balancing selection, whereas high outliers suggest adaptive (directional) selection. However, the problem of identifying statistically significant departures from neutrality is complicated by confounding effects on the distribution of F_ST estimates, and current methods have not yet been tested in large-scale simulation experiments. Here, we simulate data from a structured population at many unlinked, diallelic loci that are predominantly neutral but with some loci subject to adaptive or balancing selection. We develop a hierarchical-Bayesian method, implemented via Markov chain Monte Carlo (MCMC), and assess its performance in distinguishing the loci simulated under selection from the neutral loci. We also compare this performance with that of a frequentist method, based on moment-based estimates of F_ST. We find that both methods can identify loci subject to adaptive selection when the selection coefficient is at least five times the migration rate. Neither method could reliably distinguish loci under balancing selection in our simulations, even when the selection coefficient is twenty times the migration rate.
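As a rough illustration of the frequentist side of this comparison, a moment-style per-locus F_ST screen can be sketched as below; the simple Nei-style (H_T - H_S)/H_T estimator, the beta-distributed toy allele frequencies and the empirical outlier cutoffs are assumptions for illustration, not the estimator or thresholds used in the study.

```python
import numpy as np

def fst_per_locus(freqs, sizes):
    """Moment-style F_ST = (H_T - H_S) / H_T for one diallelic locus.

    freqs: allele frequency in each subpopulation
    sizes: sample size in each subpopulation (used only as weights)
    This simple Nei-style form ignores finite-sample bias corrections
    (e.g. Weir & Cockerham terms) for brevity.
    """
    freqs = np.asarray(freqs, dtype=float)
    w = np.asarray(sizes, dtype=float) / np.sum(sizes)
    p_bar = np.sum(w * freqs)                        # pooled allele frequency
    h_t = 2.0 * p_bar * (1.0 - p_bar)                # total expected heterozygosity
    h_s = np.sum(w * 2.0 * freqs * (1.0 - freqs))    # mean within-population heterozygosity
    return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t

# Toy screen: unusually high F_ST flags candidate targets of directional
# selection, unusually low values suggest balancing selection.
rng = np.random.default_rng(1)
neutral_freqs = rng.beta(2, 2, size=(1000, 4))       # 1000 loci, 4 demes
fsts = np.array([fst_per_locus(p, [50, 50, 50, 50]) for p in neutral_freqs])
hi_cut, lo_cut = np.quantile(fsts, [0.99, 0.01])
print(f"empirical 99%/1% F_ST cutoffs: {hi_cut:.3f} / {lo_cut:.3f}")
```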
Abstract:
The effect of episodic drought on dissolved organic carbon (DOC) dynamics in peatlands has been the subject of considerable debate, as decomposition and DOC production are thought to increase under aerobic conditions, yet decreased DOC concentrations have been observed during drought periods. Decreased DOC solubility due to drought-induced acidification driven by sulphur (S) redox reactions has been proposed as a causal mechanism; however, the evidence is based on a limited number of studies carried out at a few sites. To test this hypothesis on a range of different peats, we carried out controlled drought simulation experiments on peat cores collected from six sites across Great Britain. Our data show a concurrent increase in sulphate (SO4) and a decrease in DOC across all sites during simulated water table draw-down, although the magnitude of the relationship between SO4 and DOC differed between sites. Instead, we found a consistent relationship across all sites between the DOC decrease and acidification as measured by the pore water acid neutralising capacity (ANC). ANC provided a more consistent measure of drought-induced acidification than SO4 alone because it accounts for differences in base cation and acid anion concentrations between sites. Rewetting resulted in rapid DOC increases without a concurrent increase in soil respiration, suggesting that DOC changes were primarily controlled by soil acidity rather than soil biota. These results highlight the need for an integrated analysis of hydrologically driven chemical and biological processes in peatlands to improve our understanding and ability to predict the interaction between atmospheric pollution and changing climatic conditions from plot to regional and global scales.
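The ANC referred to here is conventionally obtained from a pore-water charge balance; a minimal sketch of that calculation is shown below, with the ion list and the example concentrations chosen for illustration rather than taken from the study.

```python
# Charge-balance acid neutralising capacity (ANC), in microequivalents per litre:
# ANC = sum(base cations) - sum(strong acid anions). The ions listed and the
# example pore-water concentrations are illustrative assumptions.
CHARGE = {"Ca": 2, "Mg": 2, "Na": 1, "K": 1, "Cl": -1, "SO4": -2, "NO3": -1}

def anc_ueq_per_l(conc_umol_per_l):
    """ANC from molar concentrations (umol/L) of major ions."""
    cations = sum(c * CHARGE[i] for i, c in conc_umol_per_l.items() if CHARGE[i] > 0)
    anions = sum(c * -CHARGE[i] for i, c in conc_umol_per_l.items() if CHARGE[i] < 0)
    return cations - anions

pre_drought = {"Ca": 40, "Mg": 30, "Na": 120, "K": 10, "Cl": 140, "SO4": 20, "NO3": 2}
post_drought = dict(pre_drought, SO4=90)   # drought-driven sulphate release
print(anc_ueq_per_l(pre_drought), anc_ueq_per_l(post_drought))
```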
Abstract:
The present study investigates the growth of error in baroclinic waves. It is found that stable or neutral waves are particularly sensitive to errors in the initial condition. Short stable waves are mainly sensitive to phase errors and the ultra-long waves to amplitude errors. Analysis simulation experiments have indicated that the amplitudes of the very long waves usually become too small in the free atmosphere, due to the sparse and very irregular distribution of upper-air observations. This also applies to the four-dimensional data assimilation experiments, since the amplitudes of the very long waves are usually underpredicted. The numerical experiments reported here show that if the very long waves have these kinds of amplitude errors in the upper troposphere or lower stratosphere, the error is rapidly propagated (within a day or two) to the surface and to the lower troposphere.
Abstract:
This paper presents an image motion model for airborne three-line-array (TLA) push-broom cameras. Both aircraft velocity and attitude instability are taken into account in modeling image motion. Effects of aircraft pitch, roll, and yaw on image motion are analyzed based on geometric relations in designated coordinate systems. The image motion is mathematically modeled as image motion velocity multiplied by exposure time. A quantitative analysis of image motion velocity is then conducted in simulation experiments. The results show that image motion caused by aircraft velocity is space-invariant, while image motion caused by aircraft attitude instability is more complicated. Pitch, roll, and yaw all contribute to image motion to different extents. Pitch dominates the along-track image motion, and both roll and yaw contribute greatly to the cross-track image motion. These results provide a valuable basis for image motion compensation to ensure high-accuracy imagery in aerial photogrammetry.
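The scale of the velocity-induced (along-track) component can be sketched from standard forward-motion geometry (image velocity ≈ ground speed × focal length / flying height, smear = image velocity × exposure time); the numerical values below are illustrative assumptions, not parameters from the paper.

```python
def along_track_smear_px(ground_speed_m_s, focal_len_m, flight_height_m,
                         exposure_s, pixel_pitch_m):
    """Image smear (in pixels) from aircraft forward velocity alone.

    Uses the standard scale relation v_image = v_ground * f / H and
    smear = v_image * t_exposure; attitude-induced (pitch/roll/yaw) motion
    is space-variant and is not modelled here.
    """
    v_image = ground_speed_m_s * focal_len_m / flight_height_m
    return v_image * exposure_s / pixel_pitch_m

# Illustrative values only: 70 m/s aircraft, 80 mm lens, 2000 m altitude,
# 2 ms exposure, 6.5 um detector pitch.
print(f"{along_track_smear_px(70, 0.08, 2000, 0.002, 6.5e-6):.2f} px")
```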
Abstract:
The theory of homogeneous barotropic beta-plane turbulence is here extended to include effects arising from spatial inhomogeneity in the form of a zonal shear flow. Attention is restricted to the geophysically important case of zonal flows that are barotropically stable and are of larger scale than the resulting transient eddy field. Because of the presumed scale separation, the disturbance enstrophy is approximately conserved in a fully nonlinear sense, and the (nonlinear) wave-mean-flow interaction may be characterized as a shear-induced spectral transfer of disturbance enstrophy along lines of constant zonal wavenumber k. In this transfer the disturbance energy is generally not conserved. The nonlinear interactions between different disturbance components are turbulent for scales smaller than the inverse of Rhines's cascade-arrest scale κ_β ≡ (β₀/2u_rms)^(1/2), and in this regime their leading-order effect may be characterized as a tendency to spread the enstrophy (and energy) along contours of constant total wavenumber κ ≡ (k² + l²)^(1/2). Insofar as this process of turbulent isotropization involves spectral transfer of disturbance enstrophy across lines of constant zonal wavenumber k, it can be readily distinguished from the shear-induced transfer which proceeds along them. However, an analysis in terms of total wavenumber κ alone, which would be justified if the flow were homogeneous, would tend to mask the differences. The foregoing theoretical ideas are tested by performing direct numerical simulation experiments. It is found that the picture of classical beta-plane turbulence is altered, through the effect of the large-scale zonal flow, in the following ways: (i) while the turbulence is still confined to κ > κ_β, the disturbance field penetrates to the largest scales of motion; (ii) the larger disturbance scales κ < κ_β exhibit a tendency to meridional rather than zonal anisotropy, namely towards v² > u² rather than vice versa; (iii) the initial spectral transfer rate away from an isotropic intermediate-scale source is significantly enhanced by the shear-induced transfer associated with straining by the zonal flow. This last effect occurs even when the large-scale shear appears weak to the energy-containing eddies, in the sense that dU/dy ≪ κv for typical eddy length and velocity scales.
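For orientation, the cascade-arrest wavenumber and the split between the turbulent (κ > κ_β) and wave-dominated (κ < κ_β) ranges follow directly from the definition above; the β₀ and u_rms values in the sketch below are illustrative assumptions, not the simulation parameters.

```python
import numpy as np

def rhines_wavenumber(beta0, u_rms):
    """Cascade-arrest (Rhines) wavenumber kappa_beta = sqrt(beta0 / (2 u_rms))."""
    return np.sqrt(beta0 / (2.0 * u_rms))

# Illustrative mid-latitude-like values (assumed, not from the paper):
beta0 = 1.6e-11   # planetary vorticity gradient, 1/(m s)
u_rms = 10.0      # rms eddy velocity, m/s
kappa_beta = rhines_wavenumber(beta0, u_rms)
print(f"kappa_beta = {kappa_beta:.2e} 1/m  (arrest scale ~ {1 / kappa_beta / 1e3:.0f} km)")

# Classify a grid of (k, l) modes by total wavenumber kappa = sqrt(k^2 + l^2):
k = l = np.linspace(0, 5 * kappa_beta, 64)
K, L = np.meshgrid(k, l)
kappa = np.hypot(K, L)
turbulent_fraction = np.mean(kappa > kappa_beta)
print(f"fraction of sampled modes with kappa > kappa_beta: {turbulent_fraction:.2f}")
```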
Abstract:
The near-neutral model of B chromosome evolution predicts that the invasion of a new population should last some tens of generations, but the details of how it proceeds in real populations are mostly unknown. Trying to fill this gap, we analyze here a natural population of the grasshopper Eyprepocnemis plorans at three time points during the last 35 years. Our results show that B chromosome frequency increased significantly during this period, and that a cline observed in 1992 had disappeared by 2012, once B frequency had reached an upper limit in all sites sampled. This indicates that, during B chromosome invasion, transient clines for B frequency are formed at the invasion front at a microgeographic scale. Computer simulation experiments showed that the pattern of change observed for genotypic frequencies is consistent with the existence of B chromosome drive through females and selection against individuals with a high number of B chromosomes.
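A minimal sketch of the kind of process invoked here (female-biased transmission, i.e. drive, combined with selection against high-B carriers) is given below; the transmission probability, fitness penalty, population size and mating scheme are all assumptions for illustration, not the model fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_b_frequency(generations=50, pop_size=2000,
                         female_drive=0.65, fitness_penalty=0.15, max_b=4):
    """Toy model: the B count per individual evolves under female drive and
    viability selection against individuals carrying many B chromosomes.
    All parameter values are illustrative assumptions."""
    b = np.zeros(pop_size, dtype=int)
    b[: pop_size // 20] = 1                      # rare initial carriers
    freq = []
    for _ in range(generations):
        # viability selection: fitness declines with B load
        w = np.maximum(1.0 - fitness_penalty * b, 0.05)
        parents = rng.choice(pop_size, size=(pop_size, 2), p=w / w.sum())
        moms, dads = b[parents[:, 0]], b[parents[:, 1]]
        # each maternal B is transmitted with probability 'female_drive' (> 0.5),
        # each paternal B with Mendelian probability 0.5
        b = rng.binomial(moms, female_drive) + rng.binomial(dads, 0.5)
        b = np.minimum(b, max_b)
        freq.append(np.mean(b > 0))
    return freq

traj = simulate_b_frequency()
print("B-carrier frequency every 10 generations:",
      [round(f, 2) for f in traj[::10]])
```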
Abstract:
Georeferencing is one of the major tasks of satellite-borne remote sensing. Compared to traditional indirect methods, direct georeferencing through a Global Positioning System/inertial navigation system requires fewer and simpler steps to obtain the exterior orientation parameters of remotely sensed images. However, the pixel shift caused by geographic positioning error, which generally derives from boresight angle as well as terrain topography variation, can have a great impact on the precision of georeferencing. The distribution of pixel shifts introduced by the positioning error on a satellite linear push-broom image is quantitatively analyzed. We use the variation of the object space coordinate to simulate different kinds of positioning errors and terrain topography. A total differential method is then applied to establish a rigorous sensor model in order to mathematically obtain the relationship between pixel shift and positioning error. Finally, two simulation experiments are conducted using the imaging parameters of the Chang'E-1 satellite to evaluate two different kinds of positioning errors. The experimental results show that, with the experimental parameters, the maximum pixel shift could reach 1.74 pixels. The proposed approach can be extended to a generic application for imaging error modeling in remote sensing with terrain variation.
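The first-order size of such pixel shifts can be sketched from simple viewing geometry (ground displacement ≈ orbital height × angular error, converted to pixels via the ground sample distance); the orbital height, boresight error and GSD below are assumed for illustration and are not the Chang'E-1 parameters.

```python
import math

def pixel_shift_from_angle_error(height_m, angle_error_rad, gsd_m):
    """First-order pixel shift caused by a small boresight (pointing) angle
    error for a nadir-looking push-broom sensor over flat terrain."""
    ground_offset = height_m * math.tan(angle_error_rad)
    return ground_offset / gsd_m

# Illustrative values (assumed): 200 km orbit, 10 arcsec boresight error, 120 m GSD.
angle = math.radians(10.0 / 3600.0)
print(f"{pixel_shift_from_angle_error(200e3, angle, 120.0):.2f} px")
```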
Abstract:
Trust is one of the most important factors that influence the successful application of network service environments, such as e-commerce, wireless sensor networks, and online social networks. Computational models of trust and reputation have received special attention in both the computing and service-science communities in recent years. In this paper, a dynamical computation model of reputation for B2C e-commerce is proposed. First, concepts associated with trust and reputation are introduced, and a mathematical formulation of trust for B2C e-commerce is given. A dynamical computation model of reputation is then proposed based on this notion of trust and on the relationship between trust and reputation. Within the proposed model, typical time-varying reputation processes in B2C e-commerce are discussed. Furthermore, iterative trust and reputation computation models are formulated via a set of difference equations based on a closed-loop feedback mechanism. Finally, a group of numerical simulation experiments is performed to illustrate the proposed model of trust and reputation. Experimental results show that the proposed model is effective in simulating the dynamical processes of trust and reputation for B2C e-commerce.
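The flavour of such a closed-loop, difference-equation formulation can be sketched as below; the specific update rule (exponential smoothing of trust toward observed transaction outcomes, with reputation fed back into the next trust value) and all coefficients are assumptions for illustration, not the equations of the proposed model.

```python
def simulate_trust_reputation(outcomes, alpha=0.3, beta=0.2,
                              trust0=0.5, reputation0=0.5):
    """Toy closed-loop difference equations (illustrative assumptions):
        T[n+1] = (1 - alpha) * T[n] + alpha * outcome[n] + beta * (R[n] - T[n])
        R[n+1] = (1 - beta)  * R[n] + beta * T[n+1]
    where outcome[n] in [0, 1] rates the n-th transaction."""
    trust, rep = trust0, reputation0
    history = [(trust, rep)]
    for o in outcomes:
        trust = (1 - alpha) * trust + alpha * o + beta * (rep - trust)
        trust = min(max(trust, 0.0), 1.0)
        rep = (1 - beta) * rep + beta * trust
        history.append((round(trust, 3), round(rep, 3)))
    return history

# A run of good transactions followed by a string of failures:
print(simulate_trust_reputation([1, 1, 1, 1, 0, 0, 0])[-1])
```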
Abstract:
Land use leads to massive habitat destruction and fragmentation in tropical forests. Despite its global dimensions, the effects of fragmentation on ecosystem dynamics are not well understood due to the complexity of the problem. We present a simulation analysis performed with the individual-based model FORMIND. The model was applied to the Brazilian Atlantic Forest, one of the world's biodiversity hot spots, at the Plateau of Sao Paulo. This study investigates the long-term effects of fragmentation processes on the structure and dynamics of remnant tropical forest fragments of different sizes (1-100 ha) at the community and plant functional type (PFT) level. We disentangle the interplay of single effects of different key fragmentation processes (edge mortality, increased mortality of large trees, local seed loss and external seed rain) using simulation experiments in a full factorial design. Our analysis reveals that particularly small forest fragments below 25 ha suffer substantial structural changes, biomass loss and biodiversity loss in the long term. At the community level, biomass is reduced by up to 60%. Two thirds of the mid- and late-successional species groups, especially shade-tolerant (late-successional climax) species groups, are prone to extinction in small fragments. The shade-tolerant species groups were most strongly affected; their tree numbers were reduced by more than 60%, mainly through increased edge mortality. This process proved to be the most powerful of those investigated, alone explaining more than 80% of the changes observed for this group. External seed rain was able to compensate for approximately 30% of the observed fragmentation effects for shade-tolerant species. Our results suggest that tropical forest fragments will suffer strong structural changes in the long term, leading to tree species impoverishment. They may reach a new equilibrium with a substantially reduced subset of the initial species pool, and are driven towards an earlier successional state. The natural regeneration potential of a landscape scattered with forest fragments appears to be limited, as external seed rain is not able to fully compensate for the observed fragmentation-induced changes. Our findings suggest basic recommendations for the management of fragmented tropical forest landscapes. (C) 2011 Elsevier B.V. All rights reserved.
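The full factorial layout over the four fragmentation processes named above can be written out directly; the sketch below merely enumerates the 2^4 on/off scenarios and is not the FORMIND model itself.

```python
from itertools import product

FACTORS = ["edge_mortality", "large_tree_mortality", "local_seed_loss", "external_seed_rain"]

# Full factorial design: every on/off combination of the four fragmentation
# processes gives one simulation scenario (2**4 = 16 runs per fragment size).
scenarios = [dict(zip(FACTORS, levels)) for levels in product([False, True], repeat=4)]

for i, s in enumerate(scenarios[:3]):
    print(i, s)
print(f"... {len(scenarios)} scenarios in total")
```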
Abstract:
There is a family of well-known external clustering validity indexes to measure the degree of compatibility or similarity between two hard partitions of a given data set, including partitions with different numbers of categories. A unified, fully equivalent set-theoretic formulation for an important class of such indexes was derived and extended to the fuzzy domain in a previous work by the author [Campello, R.J.G.B., 2007. A fuzzy extension of the Rand index and other related indexes for clustering and classification assessment. Pattern Recognition Lett., 28, 833-841]. However, the proposed fuzzy set-theoretic formulation is not valid as a general approach for comparing two fuzzy partitions of data. Instead, it is an approach for comparing a fuzzy partition against a hard referential partition of the data into mutually disjoint categories. In this paper, generalized external indexes for comparing two data partitions with overlapping categories are introduced. These indexes can be used as general measures for comparing two partitions of the same data set into overlapping categories. An important issue that is seldom addressed in the literature is also tackled in the paper, namely, how to compare two partitions of different subsamples of data. A number of pedagogical examples and three simulation experiments are presented and analyzed in detail. A review of recent related work compiled from the literature is also provided. (c) 2010 Elsevier B.V. All rights reserved.
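As a reminder of the pair-counting form these set-theoretic indexes take, a minimal Rand index for two hard partitions is sketched below; this covers only the classical hard-partition case, not the generalized indexes for overlapping categories introduced in the paper.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Classical Rand index: fraction of object pairs on which the two hard
    partitions agree (both place the pair together, or both separate it)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Two partitions of the same six objects, with different numbers of categories:
print(rand_index([0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1]))
```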
Abstract:
The aim of this work is to evaluate a fuzzy system for levodopa infusion in Parkinson's disease across different types of patients, based on simulation experiments using a pharmacokinetic-pharmacodynamic model. The fuzzy system controls the patient's condition by adjusting the infusion flow rate, and it must be effective for three different types of patients: sensitive, typical and tolerant. Sensitive patients respond strongly to small changes in drug dosage, whereas tolerant patients are resistant to the drug dose, so it is important for the controller to handle dose increments and decrements in a way that adapts to the different patient types. Using the fuzzy system, all three patient types receive useful control in simulated medication treatment, and the controller performs well provided the initial infusion flow rate lies within a small range of the approximately optimal value for the current patient's type.
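A minimal sketch of the kind of rule-based flow adjustment such a controller performs is shown below; the triangular membership functions, the rule base and the single "effect error" input are illustrative assumptions, not the controller evaluated in the study.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_flow_adjustment(effect_error):
    """Map the error between target and observed clinical effect (assumed to
    lie in [-1, 1]) to a relative change in infusion flow rate.
    Rules (illustrative): effect too low -> increase flow; about right -> hold;
    effect too high -> decrease flow."""
    mu = {
        "too_low": tri(effect_error, -1.5, -1.0, 0.0),
        "ok": tri(effect_error, -0.5, 0.0, 0.5),
        "too_high": tri(effect_error, 0.0, 1.0, 1.5),
    }
    action = {"too_low": +0.2, "ok": 0.0, "too_high": -0.2}   # relative flow change
    total = sum(mu.values())
    return 0.0 if total == 0 else sum(mu[r] * action[r] for r in mu) / total

for err in (-0.8, -0.2, 0.0, 0.5):
    print(err, round(fuzzy_flow_adjustment(err), 3))
```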
Abstract:
Background and aims: The main purpose of the PEDAL study is to identify and estimate individual pharmacokinetic-pharmacodynamic (PK/PD) models for duodenal infusion of levodopa/carbidopa (Duodopa®) that can be used for in numero simulation of treatment strategies. Other objectives are to study the absorption of Duodopa® and to form a basis for power calculation for a future larger study. PK/PD modelling based on oral levodopa is problematic because of irregular gastric emptying. Preliminary work with data from [Gundert-Remy U et al. Eur J Clin Pharmacol 1983;25:69-72] suggested that levodopa infusion pharmacokinetics can be described by a two-compartment model. Background research led to a hypothesis for an effect model incorporating concentration-unrelated fluctuations, more complex than standard E-max models. Methods: PEDAL involved a few patients already on Duodopa®. A bolus dose (the normal morning dose plus 50%) was given after an overnight washout. Data collection continued until the clinical effect was back at baseline. The procedure was repeated on two non-consecutive days per patient. The following data were collected at 5- to 15-minute intervals: i) Accelerometer data. ii) Three e-diary questions about ability to walk and feelings of "off" and "dyskinesia". iii) Clinical assessment of motor function by a physician. iv) Plasma concentrations of levodopa, carbidopa and the metabolite 3-O-methyldopa. The main effect variable will be the clinical assessment. Results: At the date of abstract submission, laboratory analyses were still being performed. Modelling results, simulation experiments and conclusions will be presented in our poster.
Abstract:
Objective: Levodopa in the presence of decarboxylase inhibitors follows two-compartment kinetics, and its effect is typically modelled using sigmoid Emax models. Pharmacokinetic modelling of the absorption phase after oral administration is problematic because of irregular gastric emptying. The purpose of this work was to identify and estimate a population pharmacokinetic-pharmacodynamic model for duodenal infusion of levodopa/carbidopa (Duodopa®) that can be used for in numero simulation of treatment strategies. Methods: The modelling involved pooling data from two studies and fixing some parameters to values found in the literature (Chan et al. J Pharmacokinet Pharmacodyn. 2005 Aug;32(3-4):307-31). The first study involved 12 patients on 3 occasions and is described in Nyholm et al. Clinical Neuropharmacology 2003:26:156-63. The second study, PEDAL, involved 3 patients on 2 occasions. A bolus dose (the normal morning dose plus 50%) was given after an overnight washout. Plasma samples and motor ratings (clinical assessment of motor function from video recordings on a treatment response scale between -3 and 3, where -3 represents severe parkinsonism and 3 represents severe dyskinesia) were repeatedly collected until the clinical effect was back at baseline. At this point, the usual infusion rate was started and sampling continued for another two hours. Different structural absorption models and effect models were evaluated using the value of the objective function in the NONMEM package. Population mean parameter values, standard errors of estimates (SE) and, where possible, interindividual/interoccasion variability (IIV/IOV) were estimated. Results: Our results indicate that Duodopa absorption can be modelled with an absorption compartment with an added bioavailability fraction and a lag time. The most successful effect model was of sigmoid Emax type with a steep Hill coefficient and an effect compartment delay. Estimated parameter values are presented in the table. Conclusions: The absorption and effect models were reasonably successful in fitting observed data and can be used in simulation experiments.
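A minimal sketch of this model class (first-order absorption with lag and bioavailability, two-compartment disposition, an effect compartment, and a sigmoid Emax link to the -3..3 rating scale) is given below; all parameter values are illustrative assumptions, not the estimates reported in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions, not the study's estimates)
F, ka = 0.85, 0.05                       # bioavailability fraction, absorption rate (1/min)
tlag = 10.0                              # absorption lag time (min)
CL, V1, Q, V2 = 0.8, 11.0, 1.7, 27.0     # two-compartment disposition (L/min, L)
ke0 = 0.1                                # effect-compartment equilibration rate (1/min)
E0, Emax, EC50, hill = -3.0, 6.0, 2.0, 6.0   # sigmoid Emax on the -3..3 rating scale

def rhs(t, y):
    a_gut, a1, a2, ce = y                # gut amount, central/peripheral amounts, effect conc.
    rate = 10.0 if tlag <= t < tlag + 10.0 else 0.0   # bolus given as a 10-min infusion (mg/min)
    c1 = a1 / V1
    da_gut = F * rate - ka * a_gut
    da1 = ka * a_gut - (CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2
    da2 = (Q / V1) * a1 - (Q / V2) * a2
    dce = ke0 * (c1 - ce)                # effect-compartment delay
    return [da_gut, da1, da2, dce]

sol = solve_ivp(rhs, (0.0, 360.0), [0.0, 0.0, 0.0, 0.0], dense_output=True, max_step=1.0)
t = np.linspace(0.0, 360.0, 7)
ce = np.clip(sol.sol(t)[3], 0.0, None)
effect = E0 + Emax * ce**hill / (EC50**hill + ce**hill)   # predicted motor rating over time
print(np.round(effect, 2))
```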
Abstract:
Cooperation is the fundamental underpinning of multi-agent systems, allowing agents to interact to achieve their goals. Where agents are self-interested, or potentially unreliable, there must be appropriate mechanisms to cope with the uncertainty that arises. In particular, agents must manage the risk associated with interacting with others who have different objectives, or who may fail to fulfil their commitments. Previous work has utilised the notions of motivation and trust in engendering successful cooperation between self-interested agents. Motivations provide a means for representing and reasoning about agents' overall objectives, and trust offers a mechanism for modelling and reasoning about reliability, honesty, veracity and so forth. This paper extends that work to address some of its limitations. In particular, we introduce the concept of a clan: a group of agents who trust each other and have similar objectives. Clan members treat each other favourably when making private decisions about cooperation, in order to gain mutual benefit. We describe mechanisms for agents to form, maintain, and dissolve clans in accordance with their self-interested nature, along with giving details of how clan membership influences individual decision making. Finally, through some simulation experiments we illustrate the effectiveness of clan formation in addressing some of the inherent problems with cooperation among self-interested agents.
Abstract:
The task of controlling urban traffic requires flexibility, adaptability and the handling of uncertain information spread throughout the intersection network. The use of fuzzy set concepts conveys these characteristics and improves system performance. This paper reviews a distributed traffic control system built upon a fuzzy distributed architecture previously developed by the authors. The emphasis of the paper is on the application of the system to control part of the downtown area of Campinas. Simulation experiments considering several traffic scenarios were performed to verify the capabilities of the system in controlling a set of coupled intersections. The performance of the proposed system is compared with conventional traffic control strategies under the same scenarios. The results obtained show that the distributed traffic control system outperforms conventional systems as far as average queue, average delay and maximum delay measures are concerned.