818 results for GIS BASED SIMULATION


Relevance: 80.00%

Publisher:

Abstract:

Serious games are a category of games designed for a specific purpose other than pure entertainment. The concept is not new, but serious games that use real data, coupled with real-time modelling and combining model results with social and economic factors, open up a new paradigm for active stakeholder participation. DHI and the UNEP-DHI Centre initiated a project called Aqua Republica, a virtual world in which participants develop a river basin and visualise the consequences of their decisions. The aim of the project is to raise awareness of the interconnectivity of water and to educate on integrated water resources management. Aqua Republica combines a game layer with a water allocation model, MIKE BASIN, to create an interactive, realistic virtual environment in which players take the role of catchment manager of an undeveloped river catchment. Their main objective is to develop the river catchment to be as prosperous as it can be. To achieve that, they need to generate a good economy in the catchment to provide the funds needed for development, maintain a steady food supply for the population, and secure enough energy and water for the catchment. Through these actions, meaningful play is established that engages players and educates them about the complex relationships between developmental actions in a river basin and the natural environment, as well as their consequences. The game layer also includes a reward system to encourage learning: people can play and replay the game, being rewarded for applying the right principles and penalised for failures. This abstract explains the concept of the game and how it has been used in a stakeholder participation setting.

Relevance: 80.00%

Publisher:

Abstract:

Owing to the need to remain competitive and profitable, the construction industry has been changing the way it conducts its business and adapting management philosophies to its reality, as in the case of Just-in-Time (JIT) and Quality Systems (e.g., TQM and TQC). Within this context lies the application of the Theory of Constraints (TOC) to the production planning and control (PPC) process of building projects. The Theory of Constraints is a management philosophy founded on the concept of a constraint and on how to manage the constraints identified, providing: (a) a procedure for those responsible for production to identify where and how to concentrate efforts, and (b) a continuous improvement process for the production system as a whole, always seeking the global optimum before the local one. Because the potential benefits and difficulties of adapting and applying this theory in the construction industry were unknown, the application of the Theory of Constraints was simulated, using the STROBOSCOPE technique (State and Resource Based Simulation of Construction Processes), on a building project in order to analyse the impacts that schedules and costs may suffer in real-world applications. These simulations showed that the Theory of Constraints and its techniques and tools are applicable to the PPC process, reducing the delays, and the costs caused by delays, that a project may incur.
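The core TOC idea referenced above — focus improvement effort on the constraint, because it alone sets system throughput — can be sketched in a few lines. This is an illustrative toy, not the STROBOSCOPE model used in the study; the crew names and capacities are hypothetical.

```python
# Illustrative sketch of the TOC principle (not the study's STROBOSCOPE model):
# the daily output of a sequential construction process equals the capacity of
# its slowest activity -- the constraint that planning should concentrate on.

def line_throughput(capacities):
    """Daily output of a sequential process is limited by its bottleneck."""
    return min(capacities)

# Hypothetical daily capacities (units/day) for three crews on a building site.
crews = {"structure": 8, "masonry": 5, "finishing": 7}

bottleneck = min(crews, key=crews.get)
print(bottleneck, line_throughput(crews.values()))  # masonry limits the line to 5
```

Raising capacity anywhere except at `masonry` leaves throughput unchanged — the "global optimum before the local one" point made in the abstract.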

Relevance: 80.00%

Publisher:

Abstract:

Oil wells subjected to cyclic steam injection present important challenges for the development of well cementing systems, mainly due to tensile stresses caused by thermal gradients during the well's useful life. Cement sheath failures in wells using conventional high compressive strength systems have led to the use of cement systems that are more flexible and/or ductile, with emphasis on Portland cement systems with latex addition. Recent research efforts have presented geopolymeric systems as alternatives. These cementing systems are based on alkaline activation of amorphous aluminosilicates such as metakaolin or fly ash and display advantageous properties such as high compressive strength, fast setting and thermal stability. Basic geopolymeric formulations can be found in the literature which meet basic oil industry specifications such as rheology, compressive strength and thickening time. In this work, new geopolymeric formulations were developed, based on metakaolin, potassium silicate, potassium hydroxide, silica fume and mineral fiber, using the state of the art in chemical composition, mixture modeling and additivation to optimize the most relevant properties for oil well cementing. Starting from molar ratios considered ideal in the literature (SiO2/Al2O3 = 3.8 and K2O/Al2O3 = 1.0), a study of dry mixtures was performed, based on the compressive packing model, resulting in an optimal volume of 6% for the added solid material. This material (silica fume and mineral fiber) works both as an additional silica source (in the case of silica fume) and as mechanical reinforcement, especially in the case of mineral fiber, which increased the tensile strength. The first triaxial mechanical study of this class of materials was performed. For comparison, a mechanical study of conventional latex-based cementing systems was also carried out.
Regardless of differences in the failure mode (brittle for geopolymers, ductile for latex-based systems), the superior uniaxial compressive strength (37 MPa for the geopolymeric slurry P5 versus 18 MPa for the conventional slurry P2), similar triaxial behavior (friction angle 21° for both P5 and P2) and lower stiffness in the elastic region (5.1 GPa for P5 versus 6.8 GPa for P2) of the geopolymeric systems allowed them to withstand a similar amount of mechanical energy (155 kJ/m3 for P5 versus 208 kJ/m3 for P2), noting that geopolymers work in the elastic regime, without the microcracking present in latex-based systems. Therefore, the geopolymers studied in this work must be designed for application in the elastic region to avoid brittle failure. Finally, the tensile strength of geopolymers is originally poor (1.3 MPa for the geopolymeric slurry P3) due to their brittle structure. However, after additivation with mineral fiber, the tensile strength became equivalent to that of latex-based systems (2.3 MPa for P5 versus 2.1 MPa for P2). The technical viability of the conventional and proposed formulations was evaluated for the whole well life, including stresses due to cyclic steam injection. This analysis was performed using finite element-based simulation software. It was verified that conventional slurries are viable up to 400 ºF (204 ºC), while geopolymeric slurries remain viable above 500 ºF (260 ºC).
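The target oxide ratios quoted above (SiO2/Al2O3 = 3.8, K2O/Al2O3 = 1.0) imply how much extra silica the activator and silica fume must contribute, since ideal metakaolin (Al2O3·2SiO2) already carries 2 mol of SiO2 per mol of Al2O3. A back-of-the-envelope check, using standard molar masses (the batch quantities below are illustrative, not the thesis's formulations):

```python
# Oxide-ratio arithmetic for the geopolymer mix design: metakaolin supplies
# 2 mol SiO2 per mol Al2O3, so potassium silicate + silica fume must supply
# the remaining 1.8 mol to reach the literature target of SiO2/Al2O3 = 3.8.
M_SIO2, M_AL2O3, M_K2O = 60.08, 101.96, 94.20  # molar masses, g/mol

target_si_al = 3.8      # SiO2/Al2O3 molar ratio considered ideal in the literature
metakaolin_si_al = 2.0  # SiO2/Al2O3 molar ratio of ideal metakaolin (Al2O3.2SiO2)

extra_sio2_mol = target_si_al - metakaolin_si_al  # mol SiO2 per mol Al2O3
extra_sio2_g = extra_sio2_mol * M_SIO2            # ~108 g SiO2 per mol Al2O3
k2o_g = 1.0 * M_K2O                               # ~94 g K2O per mol Al2O3 (K2O/Al2O3 = 1.0)

print(f"extra SiO2: {extra_sio2_g:.1f} g, K2O: {k2o_g:.1f} g per mol Al2O3")
```

In practice part of that K2O arrives as KOH and part as potassium silicate (which also contributes SiO2), so the real batching splits these totals between the two activators.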

Relevance: 80.00%

Publisher:

Abstract:

Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system. The situation becomes worse when time constraints are present. Such simulations would benefit from a mechanism that improves the way agents perceive and react to changes in these types of environments. In other words, an approach to improve the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation would be useful. In complex environments full of variables, not all the information available to the agent is necessarily relevant to its decision-making process; relevance depends on the task being performed. The agent therefore needs to filter incoming perceptions in the same way we do with our focus of attention. By using a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed herein presents a structure for cognitive agents divided into two parts: 1) the main part contains the reasoning/planning process, knowledge and affective state of the agent, and 2) a set of behaviors that are triggered by planning in order to achieve the agent's goals. Each of these behaviors has a focus of attention that is dynamically adjustable at runtime, according to the variation of the agent's affective state. The focus of each behavior is divided into a qualitative focus, which is responsible for the quality of the perceived data, and a quantitative focus, which is responsible for the quantity of the perceived data. Thus, the behavior is able to filter the information sent by the agent's sensors and build a list of perceived elements containing only the information necessary to the agent, according to the context of the behavior that is currently running.
In addition to this human-inspired attention focus, the agent is endowed with an affective state. The agent's affective state is based on theories of human emotion, mood and personality. This model serves as the basis for the mechanism of continuous adjustment of the agent's attention focus, both the qualitative and the quantitative focus. With this mechanism, the agent can adjust its focus of attention during the execution of the behavior, becoming more efficient in the face of environmental changes. The proposed architecture can be used in a very flexible way. The focus of attention can work in a fixed mode (neither the qualitative nor the quantitative focus changes), as well as with different combinations of variation for the qualitative and quantitative foci. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation affects only the perception layer of the agent. In order to evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-growing scenario. In the simulations, agents using the architecture proposed in this work were compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, it would be expected that the omniscient agents would be more efficient, since they can weigh every possible option before taking a decision. However, the experiments showed that attention-focus based agents can be as efficient as the omniscient ones, with the advantage of being able to solve the same problems in a significantly reduced time. Thus, the experiments indicate the efficiency of the proposed architecture.
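The two-part focus described above can be sketched compactly. The function and parameter names below are illustrative (they are not the thesis's actual API): the qualitative focus is modelled as a relevance threshold, the quantitative focus as an item cap, and both are tightened as the agent's affective arousal rises.

```python
# Minimal sketch of a behavior-level attention focus, assuming a hypothetical
# percept format {"id": ..., "relevance": ...}. The qualitative focus filters
# by relevance; the quantitative focus caps how many percepts get through.

def focus_filter(percepts, relevance_threshold, max_items):
    """Keep only percepts relevant enough (qualitative focus) and cap their
    number (quantitative focus), most relevant first."""
    relevant = [p for p in percepts if p["relevance"] >= relevance_threshold]
    relevant.sort(key=lambda p: p["relevance"], reverse=True)
    return relevant[:max_items]

def adjust_focus(arousal, base_threshold=0.3, base_max=10):
    """Toy affective adjustment: higher arousal narrows the focus, i.e. a
    stricter relevance threshold and fewer admitted percepts."""
    threshold = base_threshold + 0.4 * arousal
    cap = max(1, int(base_max * (1 - 0.5 * arousal)))
    return threshold, cap

percepts = [{"id": i, "relevance": r} for i, r in enumerate([0.9, 0.2, 0.6, 0.4])]
threshold, cap = adjust_focus(arousal=0.5)     # mid-level affective arousal
print(focus_filter(percepts, threshold, cap))  # only the two most relevant pass
```

The reasoning layer then plans over this reduced percept list instead of the full sensor stream, which is where the reported speed-up over the omniscient agents comes from.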

Relevance: 80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%

Publisher:

Abstract:

Factors influencing the location decisions of offices include traffic, accessibility, employment conditions, economic prospects and land-use policies. Tools that support real-estate managers and urban planners in such multidimensional decisions may therefore be useful. Accordingly, the objective of this study is to develop a GIS-based tool to support firms seeking office accommodation within a given regional or national study area. The tool relies on a matching approach, in which a firm's characteristics (demand) on the one hand, and environmental conditions and available office spaces (supply) on the other, are analyzed separately in a first step, after which a match is sought. That is, a suitability score is obtained for every firm and every available office space by applying value judgments (satisfaction, utility, etc.). These judgments rest on a focus on location aspects and on expert knowledge about the location decisions of firms and organizations with respect to office accommodation, acquired from a group of real-estate advisers; this knowledge is stored in decision tables, which constitute the core of the model. Apart from the delineation of choice sets for any firm seeking a location, the tool supports two additional types of queries. Firstly, it supports the more generic problem of optimally allocating firms to a set of vacant locations. Secondly, it allows users to find firms that meet the characteristics of any given location. Moreover, as a GIS-based tool, its results can be visualized using GIS features, which in turn facilitates several types of analyses.
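The matching step can be sketched as scoring every (firm, office) pair and offering each firm its best-scoring vacancy. The scoring function below is a hypothetical stand-in for the expert decision tables described above, using only size fit and accessibility:

```python
# Sketch of the demand/supply matching approach with a toy suitability score
# (the real tool derives scores from expert decision tables, not this formula).

def suitability(firm, office):
    """Toy value judgment: penalize floor-space mismatch, reward accessibility."""
    size_fit = 1.0 - abs(firm["size_m2"] - office["size_m2"]) / firm["size_m2"]
    return max(0.0, size_fit) * office["accessibility"]

firms = [{"name": "A", "size_m2": 500}, {"name": "B", "size_m2": 1200}]
offices = [{"id": 1, "size_m2": 550, "accessibility": 0.9},
           {"id": 2, "size_m2": 1100, "accessibility": 0.7}]

for firm in firms:
    best = max(offices, key=lambda o: suitability(firm, o))
    print(firm["name"], "->", best["id"])
```

The tool's second query type — optimally allocating many firms to a set of vacancies at once — is the classic assignment problem over this same score matrix, solvable with, e.g., `scipy.optimize.linear_sum_assignment`.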

Relevance: 80.00%

Publisher:

Abstract:

The development of the Northwest German Basin and its present-day topography has been shaped by a multitude of endogenous and exogenous processes: tectonics, volcanism, diapirism, ice advances, Elsterian channels and the deposition of Quaternary sediments. By quantifying ground-movement potentials, the influence of deep structures (in particular salt structures and tectonic faults) on the development of the present-day topography of Schleswig-Holstein was investigated. The following parameters were taken into account: (1) salt structures; (2) tectonic faults; (3) near-surface faults that very probably crop out at the surface; (4) Elsterian channels (deeper than 100 m); (5) historical earthquakes; (6) lineaments mapped in satellite imagery; (7) correlation coefficients calculated between 7 stratigraphic horizons of the "Geotektonischer Atlas von NW-Deutschland". The results show that recent ground-movement potentials occur over large areas of Schleswig-Holstein; they can be attributed to tectonic faults and salt structures and are largely confined to the area of the Glückstadt Graben. The highest ground-movement potentials occur in the five areas of Sterup, Tellingstedt North, Oldensworth North, Schwarzenbek and Plön, documenting recent processes there. In the Sterup, Schwarzenbek and Plön areas, active faults cropping out at the surface have been localized, and their presence is also confirmed by lineaments mapped in aerial and satellite imagery. In the Plön area, the derived ground-movement potentials are confirmed by a depression near Kleinneudorf that is currently growing. Beneath the depression, dissolution processes in Tertiary sediments, favoured by tectonic faults, create cavities that cause the ongoing subsidence of the depression.
For the areas of highest ground-movement potential, an influence of deep structures on the development of the present-day topography can be demonstrated. Near-surface faults in the Plön area influence the development of Lake Plön, and in the Schwarzenbek area an N-S oriented fault band causes a bend in the course of the Elbe. Furthermore, an influence on the development of the present-day topography through an interaction between ice load and salt mobility can be demonstrated in the Sterup and Oldensworth areas. Accordingly, the deposition of Quaternary sediments, and thus the course of the boundary between the Eider and Schlei-Trave river-basin units between the Sterup and Meezen salt structures, was influenced by an active response of both salt structures to ice loading. In the Oldensworth area, geological cross-sections from the base of the Upper Cretaceous to the present-day topography show that the Oldensworth and Hennstedt salt walls actively influenced the deposition of Quaternary sediments. Furthermore, from Hamburg to its mouth, the course of the Elbe follows the margins of salt structures that have risen into the near-surface zone.

Relevance: 80.00%

Publisher:

Abstract:

The biogenic production of NO in soil accounts for between 10% and 40% of the global total. A large part of the uncertainty in estimates of biogenic emissions stems from a shortage of measurements in arid regions, which comprise 40% of the earth's land surface area. This study examined the emission of NO from three ecosystems in southern Africa covering an aridity gradient from semi-arid savannas in South Africa to the hyper-arid Namib Desert in Namibia. A laboratory method was used to determine the release of NO as a function of soil moisture and soil temperature. Various methods were used to up-scale the net potential NO emissions determined in the laboratory to the vegetation patch, landscape or regional level. The importance of landscape, vegetation and climatic characteristics is emphasized. The first study took place in a semi-arid savanna region in South Africa, where soils were sampled from four landscape positions in the Kruger National Park. The maximum NO emission occurred at soil moisture contents of 10%-20% water-filled pore space (WFPS). The highest net potential NO emissions came from the low-lying landscape positions, which have the largest nitrogen (N) stocks and the largest input of N. Net potential NO fluxes obtained in the laboratory were converted into field fluxes for the period 2003-2005 for the four landscape positions, using soil moisture and temperature data obtained in situ at the Kruger National Park flux tower site. The NO emissions ranged from 1.5 to 8.5 kg ha-1 a-1. The field fluxes were up-scaled to a regional basis using geographic information system (GIS) based techniques; this indicated that the highest NO emissions came from the midslope positions, owing to their large geographical extent in the research area. Total emissions ranged from 20 × 10³ kg in 2004 to 34 × 10³ kg in 2003 for the 56 000 ha Skukuza land type. The second study took place in an arid savanna ecosystem in the Kalahari, Botswana.
In this study I collected soils from four differing vegetation patch types: Pan, Annual Grassland, Perennial Grassland and Bush Encroached patches. The maximum net potential NO fluxes ranged from 0.27 ng m-2 s-1 in the Pan patches to 2.95 ng m-2 s-1 in the Perennial Grassland patches. The net potential NO emissions were up-scaled for the year December 2005-November 2006, using 1) the net potential NO emissions determined in the laboratory, 2) the vegetation patch distribution obtained from LANDSAT NDVI measurements, 3) soil moisture contents estimated from ENVISAT ASAR measurements, and 4) soil surface temperatures from MODIS 8-day land surface temperature measurements. This up-scaling procedure gave NO fluxes ranging from 1.8 g ha-1 month-1 in the winter months (June and July) to 323 g ha-1 month-1 in the summer months (January-March). Differences occurred between the vegetation patches, with the highest NO fluxes in the Perennial Grassland patches and the lowest in the Pan patches. Over the course of the year, the mean up-scaled NO emission for the studied region was 0.54 kg ha-1 a-1, accounting for a loss of approximately 7.4% of the estimated N input to the region. The third study took place in the hyper-arid Namib Desert in Namibia. Soils were sampled from three ecosystems: the Dunes, the Gravel Plains and the riparian zone of the Kuiseb River. The net potential NO flux measured in the laboratory was used to estimate the NO flux for the Namib Desert for 2006, using modelled soil moisture and temperature data from the European Centre for Medium-Range Weather Forecasts (ECMWF) operational model at a 36 km x 35 km spatial resolution. The maximum net potential NO production occurred at low soil moisture contents (<10% WFPS); the optimal temperature was 25°C in the Dune and Riparian ecosystems and 35°C in the Gravel Plain ecosystems.
The maximum net potential NO fluxes ranged from 3.0 ng m-2 s-1 in the Riparian ecosystem to 6.2 ng m-2 s-1 in the Gravel Plains ecosystem. Up-scaling the net potential NO flux gave NO fluxes of up to 0.062 kg ha-1 a-1 in the Dune ecosystem and 0.544 kg ha-1 a-1 in the Gravel Plain ecosystem. These studies show that NO is emitted ubiquitously from terrestrial ecosystems; as such, the NO emission potential of deserts and scrublands should be taken into account in global NO models. The emission of NO is influenced by various factors such as landscape, vegetation and climate. This study examines the potential emissions from certain arid and semi-arid environments in southern Africa and other parts of the world and discusses some of the important factors controlling the emission of NO from the soil.
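The up-scaling arithmetic described above amounts to evaluating a laboratory-derived flux response for each vegetation patch and area-weighting the results. The response surface and patch areas below are illustrative placeholders, not the study's fitted curves or mapped extents:

```python
# Sketch of patch-to-region up-scaling: a net potential flux f(moisture, temp)
# measured in the lab, applied per patch type and weighted by patch area.
# The Gaussian-like response and the areas are hypothetical illustrations.
from math import exp

def no_flux(wfps, temp_c, f_max, wfps_opt=15.0, t_opt=25.0):
    """Toy response surface: flux peaks at the optimal water-filled pore
    space (%) and soil temperature (deg C), falling off around the optima."""
    return f_max * exp(-((wfps - wfps_opt) / 10) ** 2) * exp(-((temp_c - t_opt) / 10) ** 2)

# Hypothetical patch areas (ha) paired with per-patch fluxes (ng m-2 s-1),
# using the maximum net potential fluxes reported for two Kalahari patch types.
patches = {"perennial_grass": (400, no_flux(15, 25, 2.95)),
           "pan":             (100, no_flux(15, 25, 0.27))}

total_area = sum(a for a, _ in patches.values())
regional = sum(a * f for a, f in patches.values()) / total_area
print(f"area-weighted regional flux: {regional:.2f} ng m-2 s-1")
```

In the actual studies the moisture, temperature and patch-distribution inputs came from ENVISAT ASAR, MODIS and LANDSAT NDVI products (or ECMWF model fields for the Namib), evaluated per grid cell rather than per lumped patch.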

Relevance: 80.00%

Publisher:

Abstract:

The research examined the archaeological analysis of a medieval territory and the testing of digital tools for managing and analysing the data produced by the research itself. The Montalbano, the subject of the research, is a micro-region characterized by elements that make it very interesting: it is a sub-montane ridge separating the Firenze-Prato-Pistoia plain from the lower Valdarno. This frontier position made it the object of expansionist ambitions, first by the main seigneurial families and later by the communes. In a first phase, the sites attested by documentary and material sources were surveyed in order to understand the settlement dynamics of the medieval population and the strategies for controlling a territory marked by the absence of hegemony by any single power (at least until the mid-fourteenth century). The stratigraphic analysis then focused on religious architecture, since it offers the greatest quantity of data from both the documentary and the archaeological point of view. It was thus possible to obtain a picture of medieval building techniques and of the cultural influences that produced them. The archaeological data were managed through a GIS platform developed in the Laboratory of Medieval Archaeology of the University of Florence in collaboration with the LSIS laboratory of the CNRS in Marseille, structured specifically around the data-collection and data-organization procedures used during the archaeological analysis. The individual structures investigated were also documented by 3D photogrammetric survey, which in some case studies was also used as an access interface to the data derived from the stratigraphic analysis, within a 3D GIS application (Arpenteur). This made it possible to connect geometric and archaeometric data with archaeological data within a single platform, using the former as the access interface to the latter.

Relevance: 80.00%

Publisher:

Abstract:

The formation of a market price for an asset can be understood as the superposition of the individual actions of market participants, which cumulatively generate supply and demand. This is comparable, in statistical physics, to the emergence of macroscopic properties brought about by microscopic interactions between the system's components. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides scaling behaviour, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioural patterns of market participants manifest themselves on short time scales: the reaction to a given price trajectory is not purely random; rather, similar price trajectories evoke similar reactions. Building on the investigation of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular.
Over 9 orders of magnitude in time, these properties are also independent of the market analysed: trends that persist only for seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial-market bubbles and their collapses, since trends on small time scales occur far more frequently. In addition, a Monte Carlo based simulation of the financial market is analysed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive procedures, a substantial reduction in computing time is achieved through parallelization on a graphics-card architecture. To illustrate the broad range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime gains. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
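A pattern-conditioned statistic in the spirit of the method described above can be sketched as follows. This is a simplified illustration, not the thesis's actual pattern-correlation measure: price changes are mapped to signs, and the return following each sign pattern is averaged — if similar price trajectories evoke similar reactions, these conditional averages deviate from the unconditional mean.

```python
# Sketch of a pattern-conditioned return statistic (illustrative, not the
# thesis's measure): condition the next return on the preceding k-sign pattern.
from collections import defaultdict

def pattern_conditioned_returns(returns, k=2):
    """Average next-step return conditioned on the preceding k-sign pattern."""
    buckets = defaultdict(list)
    signs = [1 if r > 0 else -1 for r in returns]
    for i in range(k, len(returns)):
        buckets[tuple(signs[i - k:i])].append(returns[i])
    return {p: sum(v) / len(v) for p, v in buckets.items()}

# Tiny illustrative series; on real tick data, non-random behaviour shows up
# as conditional averages significantly different from the overall mean.
series = [0.5, -0.2, 0.3, 0.4, -0.1, 0.2, -0.3, 0.1]
print(pattern_conditioned_returns(series, k=2))
```

Significance would then be assessed against surrogates (e.g. shuffled returns), which destroy pattern structure while preserving the marginal distribution.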

Relevance: 80.00%

Publisher:

Abstract:

Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems, with well-defined vegetation and soil characteristics. Because of their high variability, developing an all-encompassing definition of riparian ecotones is challenging. However, all riparian ecotones depend on two primary factors: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have used fixed-width buffers, but this methodology has proven inadequate, as it only takes the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model uses spatial data readily available from federal and state agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact on the boundary placement of the delineated variable-width riparian ecotones of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5 and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer. The result of this study is a robust and automated GIS-based model, attached to ESRI ArcMap software, that delineates and classifies variable-width riparian ecotones.
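The floodplain core of the delineation can be sketched as a raster test: a DEM cell joins the 50-year riparian zone if its elevation is at most the elevation of its nearest stream cell plus the 50-year flood height. This stripped-down sketch uses plain lists in place of the ArcMap rasters and omits the SSURGO/NWI and land-cover steps; all values are hypothetical.

```python
# Simplified sketch of the 50-year floodplain test behind variable-width
# riparian delineation (illustrative; the actual model runs inside ArcMap
# with NHD streams, SSURGO soils and NWI wetlands layers).

def floodplain_mask(dem, stream_elev, flood_height_m=2.0):
    """Boolean grid: cell joins the floodplain when its elevation is at most
    the nearest-stream elevation plus the 50-year flood height."""
    return [[dem[r][c] <= stream_elev[r][c] + flood_height_m
             for c in range(len(dem[0]))] for r in range(len(dem))]

dem         = [[101.0, 102.5, 106.0],     # ground elevation (m), hypothetical
               [100.5, 103.0, 107.5]]
stream_elev = [[100.0, 100.0, 100.0],     # elevation of nearest stream cell (m)
               [100.0, 100.0, 100.0]]

print(floodplain_mask(dem, stream_elev))  # only cells within 2 m of stream stage
```

Note that the mask width varies with local topography rather than being a fixed buffer distance, which is exactly the advantage claimed over fixed-width buffers.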

Relevance: 80.00%

Publisher:

Abstract:

1. We used simulated and experimental plant populations to analyse mortality-driven pattern formation under size-dependent competition. Larger plants had an advantage under size-asymmetric but not under symmetric competition. Initial patterns were random or clumped. 2. The simulations were individual-based and spatially explicit. Size-dependent competition was modelled with different rules for partitioning overlapping zones of influence. 3. The experiment used genotypes of Arabidopsis thaliana with different morphological plasticity and hence different size-dependent competition. Compared with wild types, transgenic individuals over-expressed phytochrome A and had decreased plasticity because of disabled phytochrome-mediated shade avoidance. Competition among transgenics was therefore more asymmetric than among wild types. 4. Density-dependent mortality under symmetric competition did not substantially change the initial spatial pattern. Conversely, simulations under asymmetric competition and experimental patterns of transgenic over-expressors showed patterns of survivors that deviated substantially from random mortality, independent of initial patterns. 5. Small-scale initial patterns of wild types were regular rather than random or clumped. We hypothesize that this small-scale regularity may be explained by early shade avoidance of seedlings in their cotyledon stage. 6. Our experimental results support predictions from an individual-based simulation model and the conclusion that regular spatial patterns of surviving individuals should be interpreted as evidence for strong, asymmetric competitive interactions and subsequent density-dependent mortality.
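The partitioning rules mentioned in point 2 are the crux of the symmetric/asymmetric distinction. A minimal zone-of-influence sketch (parameter values illustrative, not the paper's model): under symmetric competition the contested overlap is split equally, while under complete asymmetry the larger plant pre-empts the entire contested area.

```python
# Minimal zone-of-influence partitioning sketch: how two plants divide the
# resources in the area where their zones of influence overlap. Complete
# asymmetry and an even split are the two extremes of size-dependent rules.

def partition_overlap(size_a, size_b, overlap_area, asymmetric):
    """Return the share of the contested area each plant obtains."""
    if asymmetric:  # larger plant pre-empts the whole overlap
        return (overlap_area, 0.0) if size_a >= size_b else (0.0, overlap_area)
    return (overlap_area / 2, overlap_area / 2)  # size-symmetric split

big, small, contested = 10.0, 4.0, 3.0  # plant sizes and overlap area (arbitrary units)
print(partition_overlap(big, small, contested, asymmetric=True))   # winner takes all
print(partition_overlap(big, small, contested, asymmetric=False))  # even split
```

In a spatially explicit simulation, crowded small plants starve and die under the asymmetric rule, thinning clusters and driving the surviving pattern toward regularity — the signature the paper reports.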

Relevance: 80.00%

Publisher:

Abstract:

OBJECTIVE: To estimate the cost-effectiveness of prevention of mother-to-child transmission (MTCT) of HIV with lifelong antiretroviral therapy (ART) for pregnant and breastfeeding women ('Option B+'), compared with ART during pregnancy or breastfeeding only unless clinically indicated ('Option B').
DESIGN: Mathematical modelling study of first and second pregnancies, informed by data from the Malawi Option B+ programme.
METHODS: Individual-based simulation model. We simulated cohorts of 10 000 women and their infants during two subsequent pregnancies, including the breastfeeding period, under either Option B+ or Option B. We parameterized the model with data from the literature and by analysing programmatic data. We compared the total costs of antenatal and postnatal care, and the lifetime costs and disability-adjusted life-years (DALYs) of the infected infants, between Option B+ and Option B.
RESULTS: During the first pregnancy, 15% of the infants born to HIV-infected mothers acquired the infection. With Option B+, 39% of the women were on ART at the beginning of the second pregnancy, compared with 18% with Option B. For second pregnancies, the MTCT rates were 11.3% with Option B+ and 12.3% with Option B. The incremental cost-effectiveness ratio comparing the two options ranged between about US$ 500 and US$ 1300 per DALY averted.
CONCLUSION: Option B+ prevents more vertical transmissions of HIV than Option B, mainly because more women are already on ART at the beginning of the next pregnancy. Option B+ is a cost-effective strategy for PMTCT if the total future costs and lost lifetime of the infected infants are taken into account.
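The headline figure of such an analysis is the incremental cost-effectiveness ratio (ICER): the extra cost of Option B+ divided by the DALYs it averts relative to Option B. The per-cohort totals below are hypothetical round numbers chosen only so the ratio falls inside the US$ 500-1300 range reported above; they are not the model's outputs.

```python
# Sketch of the ICER arithmetic behind the reported US$/DALY range
# (hypothetical cohort totals, not the study's actual model outputs).

def icer(cost_b_plus, cost_b, dalys_b_plus, dalys_b):
    """US$ per DALY averted by Option B+ relative to Option B."""
    return (cost_b_plus - cost_b) / (dalys_b - dalys_b_plus)

# Illustrative per-cohort totals: B+ costs more but leaves fewer DALYs
# among infected infants than B.
value = icer(cost_b_plus=3_200_000, cost_b=2_900_000,
             dalys_b_plus=14_500, dalys_b=15_000)
print(f"ICER: US$ {value:.0f} per DALY averted")
```

An intervention is then judged cost-effective by comparing this ratio against a willingness-to-pay threshold, which is why the conclusion hinges on whether the infants' future costs and lost lifetime are counted.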