779 results for Wood, Geoffrey B.: Sampling methods for multiresource forest inventory
Abstract:
The advent of multiparametric MRI has made it possible to change the way prostate biopsy is done, allowing biopsies to be directed to suspicious lesions rather than taken randomly. This review concerns a computer-assisted strategy, MRI/US fusion software-based targeted biopsy, and its performance compared with other sampling methods. Different devices, using different methods to register MR images to live TRUS, are currently in use to allow software-based targeted biopsy. The main clinical indications of MRI/US fusion software-based targeted biopsy are re-biopsy in men with persistent suspicion of prostate cancer after a first negative standard biopsy, and the follow-up of patients under active surveillance. Some studies have compared MRI/US fusion software-based targeted biopsy with standard biopsy. In men at risk with an MRI-suspicious lesion, targeted biopsy consistently detects more men with clinically significant disease than standard biopsy; some studies have also shown decreased detection of insignificant disease. Only two studies have directly compared MRI/US fusion software-based targeted biopsy with MRI/US fusion visual targeted biopsy, and diagnostic performance seems to favor the software approach. To date, no study comparing software-based targeted biopsy against in-bore MRI biopsy is available. The new software-based targeted approach appears suitable for addition to the standard pathway for achieving accurate risk stratification. Once reproducibility and cost-effectiveness are verified, the real issue will be to determine whether MRI/TRUS fusion software-based targeted biopsy represents an add-on test or a replacement for standard TRUS biopsy.
Abstract:
Background. Hepatitis B virus (HBV) is an important cause of chronic viral disease worldwide and can be life-threatening. While a safe and effective vaccine is widely available, 5 to 10% of healthy vaccinees fail to achieve a protective anti-hepatitis B surface antigen antibody (anti-HBs) titer (>10 mIU/ml). A limited number of studies have investigated the host genetics of the response to HBV vaccine. To our knowledge, no comprehensive overview of genetic polymorphisms both within and outside the HLA system has been done so far. Aim. The aim of this study was to perform a systematic review of the literature on human genetics influencing the immune response after hepatitis B vaccination. Methods. Literature searches using keywords were conducted in the electronic databases Medline, Embase and ISI Web of Science, with a cut-off date of March 2014. After selection of papers according to stringent inclusion criteria, relevant information was systematically collected from the remaining articles, including demographic data, number of patients, schedule and type of vaccine, phenotypes, genes, single nucleotide polymorphism (SNP) genotyping results, and their association with the immune response to hepatitis B vaccine. Results. The literature search produced a total of 1968 articles, from which 46 studies were kept for further analyses. From these studies, data were extracted for 19 alleles from the human leukocyte antigen (HLA) region that were reported as significant at least twice. Among those alleles, 9 were firmly associated with vaccine response outcome (DQ2 [DQB1*02 and DQB1*0201], DR3 [DRB1*03 and DRB1*0301], DR7 [DRB1*07 and DRB1*0701], C4AQ0, DPB1*0401, DQ3, DQB1*06, DRB1*01 and DRB1*13 [DRB1*1301]). In addition, data were extracted for 55 different genes, of which 13 extra-HLA genes had polymorphisms that were studied by different groups of investigators or by the same group with a replication study.
Among the 13 genes allowing comparison, 4 genes (IL-1B, IL-2, IL-4R and IL-6) revealed no significant data, 7 genes (IL-4, IL-10, IL-12B, IL-13, TNFA, IFNG and TLR2) were explored with inconsistent results, and 2 genes (CD3Z and ITGAL) yielded promising results, as their association with vaccine response was confirmed by a replication approach. Furthermore, this review produced a list of 46 SNPs from 26 genes that were associated with immune response to vaccine only once, providing novel candidates to be tested in datasets from existing genome-wide association studies (GWAS). Conclusion. To the best of our knowledge, this is the first systematic review of immunogenetic studies of the response to hepatitis B vaccine. While this work reassesses the role of several HLA alleles in vaccine response outcome, the associations with polymorphisms in genes outside the HLA region were rather inconsistent. Moreover, this work produced a list of 46 significant SNPs that were reported by a single group of investigators, opening up some interesting possibilities for further research.
Abstract:
In this paper we address the problem of extracting representative point samples from polygonal models. The goal of such a sampling algorithm is to find points that are evenly distributed. We propose star-discrepancy as a measure of sampling quality and propose new sampling methods based on global line distributions. We investigate several line generation algorithms, including an efficient hardware-based sampling method. Our method contributes to the area of point-based graphics by extracting points that are more evenly distributed than those produced by current sampling algorithms.
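As context for the evenness comparison, the baseline such methods are usually measured against is plain area-weighted random sampling over the mesh triangles. A minimal Python sketch of that baseline (function names are illustrative; this is not the authors' line-based algorithm):

```python
import math
import random

def sample_points_on_mesh(triangles, n):
    """Draw n random points from a triangle mesh, choosing triangles
    area-weighted and using the standard barycentric-coordinate trick."""
    def area(tri):
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        # Half the magnitude of the cross product of two edge vectors
        nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        return 0.5 * math.sqrt(nx * nx + ny * ny + nz * nz)

    weights = [area(t) for t in triangles]
    points = []
    for _ in range(n):
        a, b, c = random.choices(triangles, weights=weights)[0]
        r1, r2 = random.random(), random.random()
        s = math.sqrt(r1)
        # Barycentric coordinates distributed uniformly over the triangle
        u, v, w = 1.0 - s, s * (1.0 - r2), s * r2
        points.append(tuple(u * a[i] + v * b[i] + w * c[i] for i in range(3)))
    return points
```

Measuring the star-discrepancy of the resulting point set would then quantify how far such purely random sampling is from the more even distributions the line-based methods aim for.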
Abstract:
The purpose of this case study is to clarify how KM (knowledge management) capability is constructed through six different activities and to explore how this capability can be diagnosed and developed in the three case organizations. The study examines the knowledge management capability of three factories in UPM-Kymmene Wood Oy, a major Finnish plywood producer. The forest industry is usually considered to be quite hierarchical. The importance of leveraging employee skills and knowledge has been recognized in all types of organizations – including those that mainly deal with tangible resources. However, the largest part of the empirical knowledge management literature examines KM in so-called knowledge-intensive or knowledge-based organizations. This study extends the existing literature by providing an in-depth case study of the assessment and development of KM activities in three organizations with little awareness of the KM discourse. The subject is analyzed through a literature review, theoretical analysis and empirical research in the case organizations. The study also presents a structured method for evaluating the KM activities of a company and for diagnosing the main weaknesses that should be addressed in order to achieve KM excellence. The results help in understanding how knowledge management capability is constructed and provide insight into developing and exploiting it within an organization.
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
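The random-walk Metropolis step at the heart of the MCMC methods described above is short enough to sketch. The following Python toy (not the thesis code; a Gaussian likelihood for a single mean parameter with a flat prior, all names illustrative) shows how the chain explores the full posterior rather than returning a point estimate:

```python
import math
import random

random.seed(0)

def metropolis(log_post, theta0, n_iter=5000, step=0.5):
    """Random-walk Metropolis: draws a chain from the distribution
    whose log-density (up to an additive constant) is log_post."""
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)   # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy data: estimate the mean of a Gaussian with known unit variance
data = [1.8, 2.1, 2.4, 1.9, 2.2]
log_post = lambda m: -0.5 * sum((x - m) ** 2 for x in data)  # flat prior
chain = metropolis(log_post, theta0=0.0)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

The same chain also yields credible intervals, and its samples are exactly the kind of output that the experimental-design and model-based optimization tasks mentioned in the abstract build on.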
Abstract:
This study uses qualitative research methods to identify the key intellectual capital factors of a wood fuel organization and the dynamics between them. The theoretical background is drawn from the knowledge management literature, on the basis of which the factors of intellectual capital are divided into three groups: human, structural, and relational. The wood fuel business is based on procuring a simple raw material, wood, and delivering it to power plants in various forms. The profitability of the business and the company's market position rest largely on the broad exploitation and development of intangible assets. The study focuses on the wood procurement organization of a large forest industry company and its wood fuel business unit, whose value creation and intellectual capital factors are analyzed. The results show that the profitability of the wood fuel business is based on the strong competence of personnel and on dynamic management models that can reinforce the combined effect of intangible factors. The challenge in the wood fuel business is relating the factors of intellectual capital to rapidly evolving operational models and to governmental support mechanisms.
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether the local forest communities can achieve the level of technical proficiency required for accurate forest monitoring.
The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as variations in tree species and the season of data acquisition. These algorithms are adaptive with respect to point cloud characteristics and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparison with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small terrain features (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles in the airborne laser scanner data. However, being based on a moving voxel, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity.
This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
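The normalization step that motivates the DTM work above, subtracting the DTM's ground elevation from each return so that z becomes height above ground, reduces to a grid lookup per point. A minimal Python sketch, assuming a regular gridded DTM and nearest-cell lookup (function and parameter names are illustrative):

```python
def normalize_point_cloud(points, dtm, origin, cell_size):
    """Convert point elevations to heights above ground by subtracting
    the DTM value of the grid cell each point falls in.

    points: list of (x, y, z) tuples, where z is elevation
    dtm: 2D list of ground elevations, indexed as dtm[row][col]
    origin: (x0, y0) of the DTM grid's lower-left corner
    cell_size: DTM grid resolution in the same units as x and y
    """
    x0, y0 = origin
    normalized = []
    for x, y, z in points:
        col = int((x - x0) / cell_size)
        row = int((y - y0) / cell_size)
        ground = dtm[row][col]
        normalized.append((x, y, z - ground))  # z is now height above ground
    return normalized
```

Real pipelines typically interpolate the DTM (e.g., bilinearly) rather than taking the nearest cell, and must handle points that fall outside the DTM extent; the sketch omits both for brevity.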
Abstract:
The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing humans with various goods and services such as food, feed, timber and biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystem and/or the management regime, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated into a framework that can adequately handle the competition for land between major land-use options such as residential land or agricultural land. This thesis is organised in accordance with the requirements of integrating the simulation of forest changes based on wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following critical points for research have been identified: (1) a review of existing global-scale economic forest sector models; (2) simulation of global wood production under selected scenarios; (3) simulation of global vegetation carbon yields; and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover.
Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country, and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the ‘Global Forest Products Model’ (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector until 2050, conditional on selected scenarios from the Millennium Ecosystem Assessment and the Global Environmental Outlook. Secondly, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is utilized to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data are used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and to identify locations of deforestation. Using the previous results as input, a methodology to simulate the spatial dynamics of forests based on wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas, and allocates these on a five-arc-minute grid. In a first version, the model assumes current conditions throughout the entire study period and does not explicitly address forest age structure.
Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is tested under three forest management scenarios. The module succeeds in responding to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities need to be addressed, particularly the incorporation of plantation establishment, the issue of age-structure dynamics, and the implementation of a new technology change factor in the GFPM that would allow the specification of substituting raw wood products (especially fuelwood) with other non-wood products.
Abstract:
Among shrubland- and young-forest-nesting bird species in North America, Golden-winged Warblers (Vermivora chrysoptera) are among the most rapidly declining, partly because of limited nesting habitat. Creation and management of high-quality vegetation communities used for nesting are needed to reduce declines. Thus, we examined whether common characteristics could be managed across much of the Golden-winged Warbler’s breeding range to increase the daily survival rate (DSR) of nests. We monitored 388 nests on 62 sites throughout Minnesota, Wisconsin, New York, North Carolina, Pennsylvania, Tennessee, and West Virginia. We evaluated competing DSR models in spatial-temporal (dominant vegetation type, population segment, state, and year), intraseasonal (nest stage and time-within-season), and vegetation model suites. The best-supported DSR models among the three model suites suggested potential associations between daily survival rate of nests and state, time-within-season, percent grass and Rubus cover within 1 m of the nest, and distance to later-successional forest edge. Overall, grass cover (negative association with DSR above 50%), Rubus cover (DSR lowest at about 30%) within 1 m of the nest, and distance to later-successional forest edge (negative association with DSR) may represent common management targets across our states for increasing Golden-winged Warbler DSR, particularly in the Appalachian Mountains population segment. Context-specific adjustments to management strategies, such as in wetlands or areas of overlap with Blue-winged Warblers (Vermivora cyanoptera), may be necessary to increase DSR for Golden-winged Warblers.
Abstract:
1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies of the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for durations of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples was sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure, provided that sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers and spiders altogether requires a minimum sampling effort of 110 sub-samples, each of 16 seconds' duration. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component in the toolbox of techniques used during both experimental and management sampling regimes within agroecosystems, grasslands or other low-lying vegetation types.
Abstract:
Bee pollinators are currently recorded with many different sampling methods. However, the relative performances of these methods have not been systematically evaluated and compared. In response to the strong need to record ongoing shifts in pollinator diversity and abundance, global and regional pollinator initiatives must adopt standardized sampling protocols when developing large-scale and long-term monitoring schemes. We systematically evaluated the performance of six sampling methods (observation plots, pan traps, standardized and variable transect walks, trap nests with reed internodes or paper tubes) that are commonly used across a wide range of geographical regions in Europe and in two habitat types (agricultural and seminatural). We focused on bees since they represent the most important pollinator group worldwide. Several characteristics of the methods were considered in order to evaluate their performance in assessing bee diversity: sample coverage, observed species richness, species richness estimators, collector biases (identified by subunit-based rarefaction curves), species composition of the samples, and the indication of overall bee species richness (estimated from combined total samples). The most efficient method in all geographical regions, in both the agricultural and seminatural habitats, was the pan trap method. It had the highest sample coverage, collected the highest number of species, showed negligible collector bias, detected similar species as the transect methods, and was the best indicator of overall bee species richness. The transect methods were also relatively efficient, but they had a significant collector bias. The observation plots showed poor performance. As trap nests are restricted to cavity-nesting bee species, they had a naturally low sample coverage. However, both trap nest types detected additional species that were not recorded by any of the other methods. 
For large-scale and long-term monitoring schemes with surveyors with different experience levels, we recommend pan traps as the most efficient, unbiased, and cost-effective method for sampling bee diversity. Trap nests with reed internodes could be used as a complementary sampling method to maximize the numbers of collected species. Transect walks are the principal method for detailed studies focusing on plant-pollinator associations. Moreover, they can be used in monitoring schemes after training the surveyors to standardize their collection skills.
Abstract:
A model for the structure of amorphous molybdenum trisulfide, a-MoS3, has been created using reverse Monte Carlo methods. This model, which consists of chains of MoS6 units, each sharing three sulfurs with each of its two neighbors and forming alternating long (nonbonded) and short (bonded) Mo-Mo separations, is a good fit to the neutron diffraction data and is chemically and physically realistic. The paper identifies the limitations of previous models based on Mo3 triangular clusters in accounting for the available experimental data.
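Reverse Monte Carlo itself is a generic fitting loop: perturb the atomic configuration, recompute the simulated observable, and accept or reject the move against the experimental data. A toy one-dimensional Python sketch (a pair-distance histogram stands in for the neutron diffraction data; this is not the a-MoS3 model or the authors' code, and all names are illustrative):

```python
import math
import random

random.seed(1)

def pair_histogram(positions, bins, r_max):
    """Histogram of all pairwise distances: a crude one-dimensional
    stand-in for the pair distribution function fitted in RMC."""
    hist = [0] * bins
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = abs(positions[i] - positions[j])
            if r < r_max:
                hist[int(r / r_max * bins)] += 1
    return hist

def rmc_fit(positions, target_hist, bins, r_max, steps=2000, move=0.1):
    """Reverse Monte Carlo loop: perturb one atom at a time and accept
    moves by the Metropolis rule on chi^2 against the target data."""
    def chi2(hist):
        return sum((a - b) ** 2 for a, b in zip(hist, target_hist))

    cost = chi2(pair_histogram(positions, bins, r_max))
    for _ in range(steps):
        i = random.randrange(len(positions))
        old = positions[i]
        positions[i] = old + random.uniform(-move, move)
        new_cost = chi2(pair_histogram(positions, bins, r_max))
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / 2):
            cost = new_cost          # accept the move
        else:
            positions[i] = old       # reject: restore the old position
    return positions, cost
```

A production RMC code would fit a full 3D configuration against the measured structure factor under chemical constraints (coordination, minimum approach distances), which is what makes the resulting model "chemically and physically realistic".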
Abstract:
Insect pollinators provide a critical ecosystem service by pollinating many wild flowers and crops. It is therefore essential to be able to survey and monitor pollinator communities effectively across a range of habitats and, in particular, to sample the often stratified parts of the habitats where insects are found. To date, a wide array of sampling methods has been used to collect insect pollinators, but no single method has been used effectively to sample across habitat types and throughout the spatial structure of habitats. Here we present a method of ‘aerial pan-trapping’ that allows insect pollinators to be sampled across the vertical strata, from the canopy of forests to agro-ecosystems. We surveyed and compared the species richness and abundance of a wide range of insect pollinators in agricultural, secondary regenerating forest and primary forest habitats in Ghana to evaluate the usefulness of this approach. In addition to confirming the efficacy of the method at heights of up to 30 metres and the effects of trap color on catch, we found the greatest insect abundance in agricultural land and higher bee abundance and species richness in undisturbed forest compared with secondary forest.