29 results for Based structure model
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Tropical forests are carbon-dense and highly productive ecosystems. Consequently, they play an important role in the global carbon cycle. In the present study we used an individual-based forest model (FORMIND) to analyze the carbon balances of a tropical forest. The main processes of this model are tree growth, mortality, regeneration, and competition. Model parameters were calibrated using forest inventory data from a tropical forest at Mt. Kilimanjaro. The simulation results showed that the model successfully reproduces important characteristics of tropical forests (aboveground biomass, stem size distribution and leaf area index). The estimated aboveground biomass (385 t/ha) is comparable to biomass values in the Amazon and other tropical forests in Africa. The simulated forest reveals a gross primary production of 24 t C ha⁻¹ yr⁻¹. Modeling above- and belowground carbon stocks, we analyzed the carbon balance of the investigated tropical forest. The simulated carbon balance of this old-growth forest is zero on average. This study provides an example of how forest models can be used in combination with forest inventory data to investigate forest structure and local carbon balances.
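The simulated processes (growth, mortality, regeneration) can be illustrated with a toy individual-based model. This is a minimal sketch, not FORMIND itself: the parameter values, the asymptotic growth rule, and the fixed recruitment rate are all assumptions made for illustration.

```python
import random

def simulate_forest(years, growth=0.05, mortality=0.02, recruits=5,
                    max_mass=2.0, seed=42):
    """Toy individual-based forest model (illustrative only, not FORMIND).

    Each tree is represented by its aboveground biomass; trees grow
    asymptotically toward a maximum size, die at random, and a fixed
    number of saplings recruit each year.  Returns the yearly stand
    biomass trajectory (arbitrary units).
    """
    rng = random.Random(seed)
    trees = [0.1] * 50                # initial cohort of saplings
    trajectory = []
    for _ in range(years):
        # growth: asymptotic approach to max_mass (a crude stand-in for
        # light competition limiting large trees)
        trees = [m + growth * (max_mass - m) for m in trees]
        # mortality: each tree dies independently with fixed probability
        trees = [m for m in trees if rng.random() > mortality]
        # regeneration: new saplings establish every year
        trees += [0.1] * recruits
        trajectory.append(sum(trees))
    return trajectory
```

After a transient, growth, recruitment and mortality balance out and stand biomass fluctuates around an equilibrium, mirroring the near-zero average carbon balance reported for the old-growth forest.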
Abstract:
A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance dependent on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for short- and long-wave radiation fluxes (including a snow cover component for albedo calculation) of a measurement system, such as a radiation shield. One part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset. The three parameters can be determined, e.g., using the mean offsets for three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated based on parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step-size to every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
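The correction step can be illustrated with a toy version of such a model: a bias that is linear in a radiation term and a ventilation term plus a constant offset, fitted exactly from the mean offsets at three observation times. The linear form and the input values are assumptions for illustration; the actual model described above is physics-based and nonlinear.

```python
def fit_correction(obs):
    """Fit the three free parameters (a, b, c) of a toy screen-bias model

        bias = a + b * radiation + c * ventilation

    from mean offsets observed at three observation times (three
    equations, three unknowns).  obs: three (radiation, ventilation,
    mean_offset) tuples.  Solved by Gaussian elimination with partial
    pivoting on the augmented 3x3 system.
    """
    A = [[1.0, r, v, o] for r, v, o in obs]   # augmented matrix rows
    n = 3
    for col in range(n):
        pivot = max(range(col, n), key=lambda i: abs(A[i][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            A[row] = [x - f * y for x, y in zip(A[row], A[col])]
    params = [0.0] * n
    for row in reversed(range(n)):
        s = A[row][n] - sum(A[row][c] * params[c] for c in range(row + 1, n))
        params[row] = s / A[row][row]
    return params   # [a, b, c]; a is the constant offset
```

Given the fitted parameters, a known mean step size can then be distributed over individual measurements according to each day's radiation and ventilation conditions.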
Abstract:
For popular software systems, the number of daily submitted bug reports is high. Triaging these incoming reports is a time consuming task. Part of the bug triage is the assignment of a report to a developer with the appropriate expertise. In this paper, we present an approach to automatically suggest developers who have the appropriate expertise for handling a bug report. We model developer expertise using the vocabulary found in their source code contributions and compare this vocabulary to the vocabulary of bug reports. We evaluate our approach by comparing the suggested experts to the persons who eventually worked on the bug. Using eight years of Eclipse development as a case study, we achieve 33.6% top-1 precision and 71.0% top-10 recall.
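The core of the approach — comparing a bug report's vocabulary against each developer's source-code vocabulary — can be sketched with plain term-frequency vectors and cosine similarity. The tokenization and weighting below are simplifications; the paper's exact scheme may differ.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_developers(bug_report, contributions, top_n=10):
    """Rank developers by how similar their contribution vocabulary is to
    the bug report.  contributions: dict mapping developer name -> list
    of terms extracted from that developer's source-code contributions.
    """
    bug_vocab = Counter(bug_report.lower().split())
    scores = {dev: cosine(bug_vocab, Counter(terms))
              for dev, terms in contributions.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Evaluation then amounts to checking how often the person who eventually fixed the bug appears at rank 1 (top-1 precision) or within the first ten suggestions (top-10 recall).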
Abstract:
PURPOSE Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). METHODS This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. RESULTS For 15 × 34, 5 × 5, and 2 × 2 cm² fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies.
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. CONCLUSIONS The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
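The acceptance criterion used in these comparisons (dose difference within 2% of the maximum dose, or 2 mm distance to agreement) can be sketched for a 1-D profile as follows. This is a simplified per-point check, not the full clinical gamma index, and the linear interpolation of the calculated profile is an assumption.

```python
def dose_check(pos, meas, calc, dose_frac=0.02, dta_mm=2.0):
    """Per-point 2%/2 mm-style acceptance test on a 1-D dose profile.

    A measured point passes if its dose differs from the calculated dose
    at the same position by at most dose_frac of the maximum measured
    dose, OR if the (linearly interpolated) calculated profile reaches
    the measured dose within dta_mm of the point (distance to agreement).
    pos: positions in mm; meas, calc: dose values on the same grid.
    """
    tol = dose_frac * max(meas)
    out = []
    for i, x in enumerate(pos):
        dose_ok = abs(meas[i] - calc[i]) <= tol
        dta_ok = False
        for j in range(len(pos) - 1):
            c0, c1 = calc[j], calc[j + 1]
            if not (min(c0, c1) <= meas[i] <= max(c0, c1)):
                continue   # calculated profile does not reach this dose here
            if c0 == c1:
                x_cross = min((pos[j], pos[j + 1]), key=lambda p: abs(p - x))
            else:          # linearly interpolate the crossing position
                x_cross = pos[j] + (meas[i] - c0) / (c1 - c0) * (pos[j + 1] - pos[j])
            if abs(x_cross - x) <= dta_mm:
                dta_ok = True
                break
        out.append(dose_ok or dta_ok)
    return out
```

For a calculated profile that is simply shifted by 1 mm, interior points pass via the DTA branch even where the local dose difference is large.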
Abstract:
The quantification of the structural properties of snow is traditionally based on model-based stereology. Model-based stereology requires assumptions about the shape of the investigated structure. Here, we show how the density, specific surface area, and grain boundary area can be measured using a design-based method, where no assumptions about structural properties are necessary. The stereological results were also compared to X-ray tomography to verify the accuracy of the method. The specific surface area calculated with the stereological method was 19.8 ± 12.3% smaller than with X-ray tomography. For the density, the stereological method gave results that were 11.7 ± 12.1% larger than X-ray tomography. The statistical analysis of the estimates confirmed that the stereological method and the sampling used are accurate. This stereological method was successfully tested not only on artificially produced ice beads but also on several snow types. Combining stereology and polarisation microscopy provides a good estimate of grain boundary areas in ice beads and in natural snow, with some limitations.
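Two classic design-based estimators underlying such measurements can be written down directly; both are assumption-free with respect to grain shape (the sampling design, not a structural model, carries the inference).

```python
def surface_density(intersections, test_line_length):
    """Design-based stereological estimator S_V = 2 * I_L: twice the
    number of intersections between test lines and the surface, per unit
    test-line length, estimates surface area per unit volume.
    """
    return 2.0 * intersections / test_line_length

def volume_fraction(points_in_phase, total_points):
    """Point-counting estimator V_V = P_P: the fraction of test points
    landing in the phase of interest estimates its volume fraction
    (usable for snow density via the known density of ice).
    """
    return points_in_phase / total_points
```

The specific surface area discussed above is then obtained by normalizing the surface density, e.g. per ice volume or, with the density of ice, per unit mass.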
Evaluation of control and surveillance strategies for classical swine fever using a simulation model
Abstract:
Classical swine fever (CSF) outbreaks can cause enormous losses in naïve pig populations. How best to minimize the economic damage and number of culled animals caused by CSF is therefore an important research area. The baseline CSF control strategy in the European Union and Switzerland consists of culling all animals in infected herds, movement restrictions for animals, material and people within a given distance to the infected herd and epidemiological tracing of transmission contacts. Additional disease control measures such as pre-emptive culling or vaccination have been recommended based on the results from several simulation models; however, these models were parameterized for areas with high animal densities. The objective of this study was to explore whether pre-emptive culling and emergency vaccination should also be recommended in low- to moderate-density areas such as Switzerland. Additionally, we studied the influence of initial outbreak conditions on outbreak severity to improve the efficiency of disease prevention and surveillance. A spatial, stochastic, individual-animal-based simulation model using all registered Swiss pig premises in 2009 (n=9770) was implemented to quantify these relationships. The model simulates within-herd and between-herd transmission (direct and indirect contacts and local area spread). By varying the four parameters (a) control measures, (b) index herd type (breeding, fattening, weaning or mixed herd), (c) detection delay for secondary cases during an outbreak and (d) contact tracing probability, 112 distinct scenarios were simulated. To assess the impact of scenarios on outbreak severity, daily transmission rates were compared between scenarios. Compared with the baseline strategy (stamping out and movement restrictions), vaccination and pre-emptive culling reduced neither outbreak size nor outbreak duration.
Outbreaks starting in a herd with weaning piglets or fattening pigs caused higher losses in terms of the number of culled premises and lasted longer than those starting in the two other index herd types. Similarly, larger transmission rates were estimated for outbreaks starting in these index herd types. A longer detection delay resulted in more culled premises and a longer duration, and better transmission tracing increased the number of short outbreaks. Based on the simulation results, baseline control strategies seem sufficient to control CSF in areas of low to moderate animal density. Early detection of outbreaks is crucial, and risk-based surveillance should focus on weaning piglet and fattening pig premises.
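The central qualitative finding — that a longer detection delay inflates outbreak size — can be reproduced with a deliberately minimal stochastic branching sketch. This toy model ignores within-herd dynamics, contact networks, tracing and local area spread, and all parameter values are assumptions.

```python
import random

def simulate_outbreak(detection_delay, r_daily=0.3, max_days=365, seed=1):
    """Branching-process sketch of between-herd spread: each infected
    premises infects one new premises with probability r_daily per day;
    once the outbreak is detected, all infected premises are culled
    (stamping out).  Returns the number of culled premises.
    """
    rng = random.Random(seed)
    infected = 1
    for day in range(max_days):
        if day >= detection_delay:
            return infected        # detection: all infected premises culled
        infected += sum(1 for _ in range(infected) if rng.random() < r_daily)
    return infected

def mean_culled(delay, runs=500):
    """Average culled premises over independent stochastic runs."""
    return sum(simulate_outbreak(delay, seed=s) for s in range(runs)) / runs
```

Expected outbreak size grows roughly geometrically with the detection delay, which is why early detection dominates the control outcome in the full model as well.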
Abstract:
We develop statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles. We focus on the case where particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle shape and orientation, and we derive stereological estimators of the tensors. These estimators are combined to provide consistent estimators of the moments of the so-called particle cover density. The covariance structure associated with the particle cover density depends on the orientation and shape of the particles. For instance, if the distribution of the typical particle is invariant under rotations, then the covariance matrix is proportional to the identity matrix. We develop a non-parametric test for such isotropy. A flexible Lévy-based particle model is proposed, which may be analysed using a generalized method of moments in which the volume tensors enter. The developed methods are used to study the cell organization in the human brain cortex.
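The isotropy statement — a covariance matrix proportional to the identity under rotation invariance — suggests a simple non-parametric check: compare the sample second-moment matrix of particle orientation vectors with I/3. The Frobenius-distance statistic below is a simplified stand-in for the test developed in the paper.

```python
import math
import random

def second_moment(vectors):
    """3x3 sample second-moment matrix (1/n) * sum(v v^T) of unit vectors."""
    n = len(vectors)
    return [[sum(v[i] * v[j] for v in vectors) / n for j in range(3)]
            for i in range(3)]

def anisotropy(vectors):
    """Frobenius distance of the second-moment matrix from I/3; near zero
    for isotropic orientations.  A Monte Carlo reference distribution
    would turn this statistic into a formal test.
    """
    m = second_moment(vectors)
    return math.sqrt(sum((m[i][j] - (1.0 / 3.0 if i == j else 0.0)) ** 2
                         for i in range(3) for j in range(3)))

def random_unit_vectors(n, seed=0):
    """Uniformly distributed directions (normalized Gaussian vectors)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-9:
            out.append([x / norm for x in v])
    return out
```

For isotropic samples the statistic shrinks toward zero with sample size, while strongly aligned particles give values near its maximum.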
Abstract:
The phenomenon of portfolio entrepreneurship has attracted considerable scholarly attention and is particularly relevant in the family firm context. However, there is a lack of knowledge of the process through which portfolio entrepreneurship develops in family firms. We address this gap by analyzing four in-depth, longitudinal family firm case studies from Europe and Latin America. Using a resource-based perspective, we identify six distinct resource categories that are relevant to the portfolio entrepreneurship process. Furthermore, we reveal that their importance varies across time. Our resulting resource-based process model of portfolio entrepreneurship in family firms makes valuable contributions to both theory and practice.
Abstract:
Argillaceous formations generally act as aquitards because of their low hydraulic conductivities. This property, together with the large retention capacity of clays for cationic contaminants, has brought argillaceous formations into focus as potential host rocks for the geological disposal of radioactive and other waste. In several countries, programmes are under way to characterise the detailed transport properties of such formations at depth. In this context, the interpretation of profiles of natural tracers in pore waters across the formations can give valuable information about the large-scale and long-term transport behaviour of these formations. Here, tracer-profile data, obtained by various methods of pore-water extraction for nine sites in central Europe, are compiled. Data at each site comprise some or all of the conservative tracers: anions (Cl⁻, Br⁻), water isotopes (δ¹⁸O, δ²H) and noble gases (mainly He). Based on a careful evaluation of the palaeo-hydrogeological evolution at each site, model scenarios are derived for initial and boundary pore-water compositions and an attempt is made to numerically reproduce the observed tracer distributions in a consistent way for all tracers and sites, using transport parameters derived from laboratory or in situ tests. The comprehensive results from this project have been reported in Mazurek et al. (2009). Here the results for three sites are presented in detail, but the conclusions are based on model interpretations of the entire data set. In essentially all cases, the shapes of the profiles can be explained by diffusion acting as the dominant transport process over periods of several thousands to several millions of years and at the length scales of the profiles. Transport by advection has a negligible influence on the observed profiles at most sites, as can be shown by estimating the maximum advection velocities that still give acceptable fits of the model with the data.
The advantages and disadvantages of different conservative tracers are also assessed. The anion Cl⁻ is well suited as a natural tracer in aquitards, because its concentration varies considerably in environmental waters. It can easily be measured, although the uncertainty regarding the fraction of the pore space that is accessible to anions in clays remains an issue. The stable water isotopes are also well suited, but they are more difficult to measure and their values generally exhibit a smaller relative range of variation. Chlorine isotopes (δ³⁷Cl) and He are more difficult to interpret because initial and boundary conditions cannot easily be constrained by independent evidence. It is also shown that the existence of perturbing events such as the activation of aquifers due to uplift and erosion, leading to relatively sharp changes of boundary conditions, can be considered a prerequisite to obtain well-interpretable tracer signatures. On the other hand, gradual changes of boundary conditions are more difficult to parameterise and so may preclude a clear interpretation.
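The dominant process inferred above — diffusion acting over long times and formation-scale distances — can be illustrated with an explicit finite-difference solution of Fick's second law in 1-D. The grid, diffusion coefficient, and boundary values here are illustrative, not site data.

```python
def diffuse_profile(conc, d_coeff, dx, dt, steps):
    """March a 1-D concentration profile forward under Fick's second law
    with fixed-concentration boundaries (e.g. aquifers bounding an
    aquitard).  Explicit scheme; stable only for d_coeff*dt/dx**2 <= 0.5.
    """
    r = d_coeff * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this r"
    c = list(conc)
    for _ in range(steps):
        c = ([c[0]] +
             [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
              for i in range(1, len(c) - 1)] +
             [c[-1]])
    return c
```

Starting from a sharp change in boundary conditions (such as aquifer activation after uplift and erosion), the profile relaxes toward the smooth, near-linear shapes that make these tracer signatures interpretable.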
Abstract:
Dynamic models for electrophoresis are based on model equations derived from transport concepts in solution, together with user-specified conditions. They predict the movement of ions theoretically and are thus the most versatile tool for exploring the fundamentals of electrokinetic separations. Since its inception three decades ago, dynamic computer simulation software and its use have progressed significantly, and Electrophoresis played a pivotal role in that endeavor, as a large proportion of the fundamental and application papers were published in this periodical. Software is available that simulates all basic electrophoretic systems, including moving boundary electrophoresis, zone electrophoresis, ITP, IEF and EKC, and their combinations under almost exactly the same conditions used in the laboratory. This has been employed to show the detailed mechanisms of many of the fundamental phenomena that occur in electrophoretic separations. Dynamic electrophoretic simulations are relevant for separations on any scale and instrumental format, including free-fluid preparative, gel, capillary and chip electrophoresis. This review includes a historical overview, a survey of current simulators, simulation examples and a discussion of the applications and achievements of dynamic simulation.
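At its simplest, electrophoretic migration is just velocity = mobility × field; everything that makes dynamic simulators powerful (diffusion, electroneutrality, pH and complexation equilibria, a changing local field) sits on top of this. A deliberately bare sketch, with approximate textbook mobilities as assumptions:

```python
def migrate(ions, field, t):
    """Toy zone-electrophoresis step: each analyte zone moves with
    velocity mobility * field for time t (migration only; no diffusion
    or electromigration dispersion).

    ions: dict name -> (mobility in m^2/(V*s), start position in m)
    field: electric field in V/m; returns final positions in m.
    """
    return {name: x0 + mu * field * t for name, (mu, x0) in ions.items()}
```

Even this caricature reproduces the basic separation: ions with different mobilities injected at the same point occupy distinct zones after migration.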
Abstract:
A lack of quantitative high resolution paleoclimate data from the Southern Hemisphere limits the ability to examine current trends within the context of long-term natural climate variability. This study presents a temperature reconstruction for southern Tasmania based on analyses of a sediment core from Duckhole Lake (43.365°S, 146.875°E). The relationship between non-destructive whole core scanning reflectance spectroscopy measurements in the visible spectrum (380–730 nm) and the instrumental temperature record (AD 1911–2000) was used to develop a calibration-in-time reflectance spectroscopy-based temperature model. Results showed that a trough in reflectance from 650 to 700 nm, which represents chlorophyll and its derivatives, was significantly correlated to annual mean temperature. A calibration model was developed (R = 0.56, p < 0.05, root mean squared error of prediction (RMSEP) = 0.21°C, five-year filtered data, calibration period 1911–2000) and applied down-core to reconstruct annual mean temperatures in southern Tasmania over the last c. 950 years. This indicated that temperatures were initially cool c. AD 1050, but steadily increased until the late AD 1100s. After a brief cool period in the AD 1200s, temperatures again increased. Temperatures steadily decreased during the AD 1600s and remained relatively stable until the start of the 20th century when they rapidly decreased, before increasing from the AD 1960s onwards. Comparisons with high resolution temperature records from western Tasmania, New Zealand and South America revealed some similarities, but also highlighted differences in temperature variability across the mid-latitudes of the Southern Hemisphere. These are likely due to a combination of factors including the spatial variability in climate between and within regions, and differences between records that document seasonal (i.e. warm season/late summer) versus annual temperature variability.
This highlights the need for further records from the mid-latitudes of the Southern Hemisphere in order to constrain past natural spatial and seasonal/annual temperature variability in the region, and to accurately identify and attribute changes to natural variability and/or anthropogenic activities.
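The calibration-in-time workflow — regress instrumental temperatures on a reflectance index over the overlap period, then apply the model down-core — can be sketched with ordinary least squares. The index definition, filtering, and validation details of the actual study are omitted here.

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def calibrate_and_reconstruct(index_cal, temp_cal, index_core):
    """Calibrate temperature against a spectral index (e.g. the depth of
    the 650-700 nm reflectance trough) over the instrumental period,
    then reconstruct temperatures for the down-core index values.
    """
    a, b = fit_linear(index_cal, temp_cal)
    return [a + b * r for r in index_core]
```

A prediction error such as the RMSEP quoted above would be estimated by withholding part of the calibration period and predicting it, rather than from the in-sample fit residuals alone.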
Abstract:
The present paper introduces the topical area of the Polish-Swiss research project FLORIST (Flood risk on the northern foothills of the Tatra Mountains), describes its objectives, and reports initial results. The Tatra Mountains are the area of the highest precipitation in Poland and largely contribute to flood generation. The project is focused on four competence clusters: observation-based climatology, model-based climate change projections and impact assessment, dendrogeomorphology, and impact of large wood debris on fluvial processes. The knowledge generated in the FLORIST project is likely to influence the understanding and interpretation of flood risk on the northern foothills of the Tatra Mountains, in the past, present, and future. It can help solve important practical problems related to flood risk reduction strategies and flood preparedness.
Abstract:
In recent years, there has been a renewed interest in the ecological consequences of individual trait variation within populations. Given that individual variability arises from evolutionary dynamics, to fully understand eco-evolutionary feedback loops, we need to pay special attention to how standing trait variability affects ecological dynamics. There is mounting empirical evidence that intra-specific phenotypic variation can exceed species-level means, but theoretical models of multi-trophic species coexistence typically neglect individual-level trait variability. What is needed are multispecies datasets that are resolved at the individual level that can be used to discriminate among alternative models of resource selection and species coexistence in food webs. Here, using one of the largest individual-based datasets of a food web compiled to date, along with an individual trait-based stochastic model that incorporates Approximate Bayesian computation methods, we document intra-population variation in the strength of prey selection by different predator classes (or phenotypes), which could potentially alter the diversity and coexistence patterns of food webs. In particular, we found that strongly connected individual predators preferentially consumed common prey, whereas weakly connected predators preferentially selected rare prey. Such patterns suggest that food web diversity may be governed by the distribution of predator connectivity and individual trait variation in prey selection. We discuss the consequences of intra-specific variation in prey selection to assess fitness differences among predator classes (or phenotypes) and track longer term food web patterns of coexistence accounting for several phenotypes within each prey and predator species.
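Rejection-ABC, the inference engine mentioned above, reduces to a short loop: draw parameters from the prior, simulate, and keep draws whose simulated summary statistic lands close to the observed one. The one-parameter prey-choice model below is vastly simpler than the paper's food-web model and exists only to show the mechanics.

```python
import random

def abc_prey_selection(observed_common_fraction, n_meals=200,
                       n_draws=3000, tol=0.02, seed=7):
    """Rejection-ABC for p = probability that a predator selects the
    common prey over the rare prey.  Uniform prior on [0, 1]; a draw is
    accepted when the simulated fraction of common prey in n_meals meals
    is within tol of the observed fraction.  Returns the accepted draws
    (an approximate posterior sample).
    """
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        p = rng.random()                                   # prior draw
        sim = sum(rng.random() < p for _ in range(n_meals)) / n_meals
        if abs(sim - observed_common_fraction) <= tol:
            accepted.append(p)
    return accepted
```

The posterior sample concentrates around the data-consistent selection strength; in the full model the same scheme operates on far richer summaries of individual diets.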
Abstract:
Identifying drivers of species diversity is a major challenge in understanding and predicting the dynamics of species-rich semi-natural grasslands. In temperate grasslands in particular, changes in land use and their consequences, i.e. increasing fragmentation, the ongoing loss of habitat, and the declining importance of regional processes such as seed dispersal by livestock, are considered key drivers of the diversity loss witnessed within the last decades. It is a largely unresolved question to what degree current temperate grassland communities already reflect a decline of regional processes such as longer-distance seed dispersal. Answering this question is challenging, since it requires both a mechanistic approach to community dynamics and sufficient data to allow identifying general patterns. Here, we present results of a local individual- and trait-based community model that was initialized with plant functional types (PFTs) derived from an extensive empirical data set of species-rich grasslands within the `Biodiversity Exploratories' in Germany. Driving model processes included above- and belowground competition, dynamic resource allocation to shoots and roots, clonal growth, grazing, and local seed dispersal. To test for the impact of regional processes, we also simulated seed input from a regional species pool. Model output, with and without regional seed input, was compared with empirical community response patterns along a grazing gradient. Simulated response patterns of changes in PFT richness, Shannon diversity, and biomass production matched observed grazing response patterns surprisingly well when only local processes were considered. Even low levels of additional regional seed input led to stronger deviations from the empirical community patterns.
While these findings cannot rule out that regional processes other than those considered in the modeling study play a role in shaping local grassland communities, our comparison indicates that European grasslands are largely isolated, i.e. local mechanisms explain observed community patterns to a large extent.
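The effect of switching regional seed input on and off can be caricatured with a lottery model: sites are recolonized from local abundance or, with some probability, from a uniform regional pool. All settings here are assumptions; the model above is far richer (traits, resource competition, grazing).

```python
import random

def simulate_grassland(regional_input, years=200, sites=400, n_pft=20,
                       seed=3):
    """Lottery-community sketch: each year every site is taken by a
    recruit drawn from the local community (proportional to abundance)
    or, with probability regional_input, from a regional pool in which
    all PFTs are equally likely.  Returns final PFT richness.
    """
    rng = random.Random(seed)
    community = [rng.randrange(n_pft) for _ in range(sites)]
    for _ in range(years):
        community = [rng.randrange(n_pft)
                     if rng.random() < regional_input
                     else rng.choice(community)
                     for _ in range(sites)]
    return len(set(community))
```

Without regional input, drift slowly erodes local richness; even modest regional seed rain props it up. That contrast between the two regimes is exactly what the simulation experiment above exploits when comparing model output against empirical grazing-response patterns.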
Abstract:
Two new approaches to quantitatively analyze diffuse diffraction intensities from faulted layer stacking are reported. The parameters of a probability-based growth model are determined with two iterative global optimization methods: a genetic algorithm (GA) and particle swarm optimization (PSO). The results are compared with those from a third global optimization method, a differential evolution (DE) algorithm [Storn & Price (1997). J. Global Optim. 11, 341–359]. The algorithm efficiencies in the early and late stages of iteration are compared. The accuracy of the optimized parameters improves with increasing size of the simulated crystal volume. The wall clock time for computing quite large crystal volumes can be kept within reasonable limits by the parallel calculation of many crystals (clones) generated for each model parameter set on a super- or grid computer. The faulted layer stacking in single crystals of trigonal three-pointed-star-shaped tris(bicyclo[2.1.1]hexeno)benzene molecules serves as an example for the numerical computations. Based on numerical values of seven model parameters (reference parameters), nearly noise-free reference intensities of 14 diffuse streaks were simulated from 1280 clones, each consisting of 96 000 layers (reference crystal). The parameters derived from the reference intensities with GA, PSO and DE were compared with the original reference parameters as a function of the simulated total crystal volume. The statistical distribution of structural motifs in the simulated crystals is in good agreement with that in the reference crystal. The results found with the growth model for layer stacking disorder are applicable to other disorder types and modeling techniques, Monte Carlo in particular.
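Of the three optimizers compared, DE is the most compact to write down. A minimal DE/rand/1/bin sketch, applied here to a toy objective rather than the diffuse-intensity misfit used in the paper:

```python
import random

def differential_evolution(f, bounds, pop_size=20, f_weight=0.7, cr=0.9,
                           generations=200, seed=5):
    """Minimal DE/rand/1/bin in the spirit of Storn & Price (1997).
    bounds: one (lo, hi) pair per parameter.  Returns (best_x, best_cost).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: scaled difference of two members added to a third
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [pop[a][k] + f_weight * (pop[b][k] - pop[c][k])
                     if (rng.random() < cr or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            # clip trial vector back into the bounds
            trial = [min(max(t, lo), hi)
                     for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:          # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

GA and PSO differ mainly in how trial solutions are generated and retained; the expensive part in the application above is the forward simulation of diffuse intensities inside f, which is what motivates the parallel clone computation on super- or grid computers.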