38 results for Wood, Geoffrey B.: Sampling methods for multiresource forest inventory
Abstract:
Biomass allocation to above- and belowground compartments in trees is thought to be affected by growth conditions. To assess the strength of such influences, we sampled six Norway spruce forest stands growing at higher altitudes. Within these stands, we randomly selected a total of 77 Norway spruce trees and measured the volume and biomass of the stem, the above- and belowground stump, and all roots over 0.5 cm in diameter. A comparison of our observations with models parameterised for lower altitudes shows that models developed for specific conditions may not be applicable to other locations. Using our observations, we developed biomass functions (BF) and biomass conversion and expansion factors (BCEF) linking belowground biomass to stem parameters. While both BF and BCEF are accurate in predicting belowground biomass, using BCEF appears more promising as such factors can be readily applied to existing forest inventory data to obtain estimates of belowground biomass stock. As an example, we show how BF and BCEF developed for individual trees can be used to estimate belowground biomass at the stand level. In combination with existing aboveground models, our observations can be used to quantify the total standing biomass of high-altitude Norway spruce stands.
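For illustration, the sketch below shows how a belowground BCEF of the kind described here might be applied to stem volumes from an inventory to obtain a stand-level estimate; the factor value and the per-tree volumes are invented placeholders, not figures from the study.

```python
# Minimal sketch of applying a belowground biomass conversion and expansion
# factor (BCEF) at the stand level: belowground biomass = stem volume x BCEF,
# summed over the trees of the stand. The BCEF value and stem volumes below
# are hypothetical placeholders, not values reported in the study.

def belowground_biomass_t(stem_volumes_m3, bcef_t_per_m3):
    """Return the stand-level belowground biomass estimate in tonnes."""
    return sum(v * bcef_t_per_m3 for v in stem_volumes_m3)

stem_volumes = [0.85, 1.20, 0.65]   # stem volume per inventoried tree, m3 (assumed)
bcef = 0.12                         # assumed belowground BCEF, t dry biomass per m3 of stem

print(f"Estimated belowground biomass: {belowground_biomass_t(stem_volumes, bcef):.2f} t")
```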
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed-memory systems with reliable interconnection networks. However, in large-scale, geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
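For context, the "straightforward parallel formulation" referred to above amounts to a local assignment step on each node followed by a global reduction of per-cluster sums and counts. The sketch below illustrates that structure with mpi4py and synthetic data; it is a generic, assumed implementation, not the code evaluated in this work.

```python
# One possible "straightforward" parallel K-Means: each rank assigns its local
# points to the nearest centroid, then per-cluster sums and counts are combined
# with a blocking all-reduce. Data, k and the initial centroids are synthetic.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

k, dim = 2, 2
rng = np.random.default_rng(seed=rank)
local_points = rng.normal(size=(100, dim)) + rank    # this rank's share of the data
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])       # identical initial centroids on every rank

for _ in range(10):
    # Local assignment step: nearest centroid for each local point.
    dists = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

    # Local per-cluster sums and counts.
    sums = np.array([local_points[labels == j].sum(axis=0) for j in range(k)])
    counts = np.array([(labels == j).sum() for j in range(k)], dtype=float)

    # Global reduction: every rank receives the aggregated statistics.
    global_sums = np.zeros_like(sums)
    global_counts = np.zeros_like(counts)
    comm.Allreduce(sums, global_sums, op=MPI.SUM)
    comm.Allreduce(counts, global_counts, op=MPI.SUM)
    centroids = global_sums / np.maximum(global_counts, 1.0)[:, None]

if rank == 0:
    print("Centroids after 10 iterations:\n", centroids)
```

The blocking all-reduce is exactly the global synchronisation point that a single failure or a slow link can stall, which is the limitation the decentralised formulation is designed to remove.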
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed-memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale, geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
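To make the epidemic idea concrete, the toy simulation below mimics a gossip-averaging scheme in which every node repeatedly averages its per-cluster sums and counts with a randomly chosen peer, so all nodes drift towards the global centroids without any global communication step. The network model, data and constants are illustrative assumptions; this is a sketch of the general approach, not the published Epidemic K-Means protocol.

```python
# Toy, single-process simulation of decentralised K-Means via gossip averaging.
# Each node holds private data plus local per-cluster sums and counts; pairwise
# gossip rounds average those statistics so every node's centroid estimate
# approaches the centralised solution. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, k, dim = 20, 2, 2

# Private data per node: two synthetic blobs around (0, 0) and (5, 5).
data = [np.vstack([rng.normal(0, 1, (30, dim)), rng.normal(5, 1, (30, dim))])
        for _ in range(n_nodes)]
centroids = [np.array([[0.0, 0.0], [1.0, 1.0]]) for _ in range(n_nodes)]

for _ in range(15):                          # K-Means iterations
    sums, counts = [], []
    for node in range(n_nodes):
        d = np.linalg.norm(data[node][:, None, :] - centroids[node][None, :, :], axis=2)
        labels = d.argmin(axis=1)
        sums.append(np.array([data[node][labels == j].sum(axis=0) for j in range(k)]))
        counts.append(np.array([(labels == j).sum() for j in range(k)], dtype=float))

    for _ in range(200):                     # gossip rounds: pairwise averaging of statistics
        a, b = rng.choice(n_nodes, size=2, replace=False)
        sums[a] = sums[b] = (sums[a] + sums[b]) / 2.0
        counts[a] = counts[b] = (counts[a] + counts[b]) / 2.0

    for node in range(n_nodes):              # every node updates its own centroid estimate
        centroids[node] = sums[node] / np.maximum(counts[node], 1e-9)[:, None]

print("Node 0 centroid estimate:\n", centroids[0])
```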
Abstract:
This paper explores the relationship between national institutional archetypes and investments in training and development. A recent trend within the literature on comparative capitalism has been to explore the nature and extent of heterogeneity within the coordinated market economies (CMEs) of Europe. Based on a review of the existing comparative literature on training and development, and comparative firm-level survey evidence of differences in training and development practices, we both support and critique existing country clusters and argue for a more nuanced and flexible categorization.
Abstract:
Although there is now a sizeable body of academic literature that tries to explain cross-country differences in terms of corporate control, capital market development, investor protection and politics, there is as yet very little literature on the degrees of protection accorded to other corporate stakeholders, such as employees, based on a systematic comparison of firm-level evidence. We find that both theories of legal origin and the varieties of capitalism approach are poor predictors of the relative propensity of firms to make redundancies in different settings. However, the political orientation of the government in place and, even more so, the nature of the electoral system are relatively good predictors of this propensity. In other words, political structures and outcomes matter more than relatively rigid institutional features such as legal origin. We explore the reasons for this, drawing out the implications for both theory and practice.
Abstract:
Long-term monitoring of forest soils as part of a pan-European network to detect environmental change depends on an accurate determination of the mean of the soil properties at each monitoring event. However, forest soil is known to be highly variable spatially. A study was undertaken to explore and quantify this variability at three forest monitoring plots in Britain. Detailed soil sampling was carried out, and the data from the chemical analyses were analysed by classical statistics and geostatistics. An analysis of variance showed that there were no consistent effects of sample position in relation to the trees. The variogram analysis showed that there was spatial dependence at each site for several variables, and some varied in an apparently periodic way. An optimal sampling analysis based on the multivariate variogram for each site suggested that a bulked sample from 36 cores would reduce the error to an acceptable level. Future sampling should be designed so that it neither targets nor avoids trees and disturbed ground; this is best achieved by using a stratified random sampling design.
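As a rough illustration of the recommended design, the sketch below draws 36 core locations by stratified random sampling, one core placed uniformly at random in each cell of a 6 x 6 grid over a square plot; the plot dimensions and grid layout are assumptions for illustration, not the study's design.

```python
# Sketch of a stratified random sampling design for soil cores: divide the plot
# into a 6 x 6 grid and draw one core location uniformly at random within each
# cell, giving 36 cores to bulk into a single composite sample. The 50 m plot
# side length is an assumed figure.
import numpy as np

rng = np.random.default_rng(42)
plot_side = 50.0          # plot side length in metres (assumed)
n_strata = 6              # 6 x 6 grid -> 36 cells, one core per cell
cell = plot_side / n_strata

cores = [((i + rng.random()) * cell, (j + rng.random()) * cell)
         for i in range(n_strata) for j in range(n_strata)]

for x, y in cores[:5]:
    print(f"core at x = {x:5.1f} m, y = {y:5.1f} m")
print(f"... {len(cores)} core locations in total")
```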