6 results for Agent Tree

in Greenwich Academic Literature Archive - UK


Relevance:

20.00%

Publisher:

Abstract:

Data from three forest sites in Sumatra (Batang Ule, Pasirmayang and Tebopandak) have been analysed and compared for the effects of sample-area cut-off and tree-diameter cut-off. An 'extended inverted exponential model' is shown to be well suited to fitting tree species-area curves. The model yields species carrying capacities of 680 species for Batang Ule, 380 for Pasirmayang, and 35 for Tebopandak (tree diameter >10cm). It would seem that in terms of species carrying capacity, Tebopandak and Pasirmayang are rather similar, and both less diverse than the hilly Batang Ule site. In terms of conservation policy, this would mean that rather more emphasis should be put on conserving hilly sites on a granite substratum. For Pasirmayang with tree diameter >3cm, the asymptotic species number estimate is 567, considerably higher than the estimate of 387 species for trees with diameter >10cm. It is clear that the diameter cut-off has a major impact on the estimate of the species carrying capacity. A conservative estimate of the total number of tree species in the Pasirmayang region is 632 species. In sampling exercises, the diameter cut-off should not be chosen lightly, and it may be worth adopting field sampling procedures that involve some subsampling of the primary sample area, with the diameter cut-off set much lower than in the primary plots.
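The species carrying capacity in this abstract is the asymptote of a fitted species-area curve. The exact form of the paper's "extended inverted exponential model" is not given here, so the sketch below assumes an illustrative saturating form S(A) = S_max(1 − exp(−(A/b)^c)) and recovers its asymptote S_max from synthetic data by a naive grid-search least-squares fit; the function name, grid, and parameter values are all hypothetical.

```python
import math

def extended_inv_exp(area, s_max, b, c):
    # Assumed illustrative form S(A) = S_max * (1 - exp(-(A/b)**c));
    # the paper's exact parameterization may differ.
    return s_max * (1.0 - math.exp(-(area / b) ** c))

# Synthetic species-area data (areas in ha) generated from the assumed
# model with S_max=380, b=3.0, c=0.8 -- purely for illustration.
areas = [0.25, 0.5, 1, 2, 4, 8, 16, 32]
counts = [extended_inv_exp(a, 380.0, 3.0, 0.8) for a in areas]

# Naive grid-search least squares (a stand-in for a proper optimiser).
best, best_sse = None, float("inf")
for s_max in range(300, 461, 10):
    for b10 in range(10, 61, 5):        # b in 1.0 .. 6.0
        for c10 in range(5, 16):        # c in 0.5 .. 1.5
            b, c = b10 / 10.0, c10 / 10.0
            sse = sum((extended_inv_exp(a, s_max, b, c) - s) ** 2
                      for a, s in zip(areas, counts))
            if sse < best_sse:
                best, best_sse = (s_max, b, c), sse

print(best)   # recovers (380, 3.0, 0.8) on this noise-free data
```

The fitted s_max plays the role of the species carrying capacity; refitting with data truncated at a different diameter cut-off would shift this asymptote, which is the effect the abstract highlights.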

Relevance:

20.00%

Publisher:

Abstract:

The SB distributional model of Johnson's 1949 paper was introduced by a transformation to normality, that is, z ~ N(0, 1), consisting of a linear scaling to the range (0, 1), a logit transformation, and an affine transformation, z = γ + δu. The model, in its original parameterization, has often been used in forest diameter distribution modelling. In this paper, we define the SB distribution in terms of the inverse transformation from normality, including an initial linear scaling transformation, u = γ′ + δ′z (δ′ = 1/δ and γ′ = −γ/δ). The SB model in terms of the new parameterization is derived, and maximum likelihood estimation schemes are presented for both model parameterizations. The statistical properties of the two alternative parameterizations are compared empirically on 20 data sets of diameter distributions of Changbai larch (Larix olgensis Henry). The new parameterization is shown to be statistically better than Johnson's original parameterization for the data sets considered here.
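The two parameterizations are inverses of each other: the original maps a diameter x to a standard normal z via scaling, logit, and affine steps, while the new one maps z back to x with γ′ = −γ/δ and δ′ = 1/δ. A minimal round-trip sketch, with illustrative parameter values not taken from the paper:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(u):
    return 1.0 / (1.0 + math.exp(-u))

# Johnson's original parameterization: x -> z ~ N(0, 1)
def sb_to_normal(x, xi, lam, gamma, delta):
    y = (x - xi) / lam               # linear scaling to (0, 1)
    return gamma + delta * logit(y)  # logit, then affine transform

# New parameterization: z -> x, with gamma_p = -gamma/delta, delta_p = 1/delta
def normal_to_sb(z, xi, lam, gamma_p, delta_p):
    u = gamma_p + delta_p * z        # initial linear scaling of z
    return xi + lam * expit(u)       # inverse logit, rescale to (xi, xi+lam)

# Round-trip check with hypothetical values: xi, lam set the support.
xi, lam, gamma, delta = 5.0, 40.0, 0.3, 1.2
gamma_p, delta_p = -gamma / delta, 1.0 / delta
x = 18.7                             # a diameter inside (xi, xi + lam)
z = sb_to_normal(x, xi, lam, gamma, delta)
print(abs(normal_to_sb(z, xi, lam, gamma_p, delta_p) - x) < 1e-9)  # True
```

Maximum likelihood under either parameterization optimises the same density; the paper's point is that the (γ′, δ′) form has better statistical behaviour on the larch data.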

Relevance:

20.00%

Publisher:

Abstract:

Johnson's SB distribution is a four-parameter distribution that is transformed into a normal distribution by a logit transformation. By replacing the normal distribution of Johnson's SB with the logistic distribution, we obtain a new distributional model that approximates SB. It is analytically tractable, and we name it the "logit-logistic" (LL) distribution. A generalized four-parameter Weibull model and the Burr XII model are also introduced for comparison purposes. Using the distribution "shape plane" (with skewness and kurtosis as axes), we compare the "coverage" properties of the LL, the generalized Weibull, and the Burr XII with those of Johnson's SB, the beta, and the three-parameter Weibull, the main distributions used in forest modelling. The LL is found to have the largest range of shapes. An empirical case study of the distributional models is conducted on 107 sample plots of Chinese fir. The LL performs best among the four-parameter models.
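The analytic tractability comes from swapping the normal CDF for the standard logistic CDF (expit), so both the LL distribution function and its inverse are closed-form. A sketch under that construction, with hypothetical parameter values (not fitted to the Chinese fir plots):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(u):
    return 1.0 / (1.0 + math.exp(-u))

# Logit-logistic CDF: Johnson's SB chain of transformations, but with the
# logistic CDF (expit) in place of the normal CDF at the final step.
def ll_cdf(x, xi, lam, gamma, delta):
    return expit(gamma + delta * logit((x - xi) / lam))

# Closed-form quantile function -- the tractability the abstract mentions.
def ll_quantile(p, xi, lam, gamma, delta):
    return xi + lam * expit((logit(p) - gamma) / delta)

# Illustrative parameters: support (xi, xi + lam), shape (gamma, delta).
xi, lam, gamma, delta = 2.0, 30.0, -0.4, 1.5
p = ll_cdf(14.0, xi, lam, gamma, delta)
print(abs(ll_quantile(p, xi, lam, gamma, delta) - 14.0) < 1e-9)  # True
```

With a normal CDF in place of expit, the quantile would need the normal inverse CDF numerically; the logistic swap is what makes both directions elementary.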

Relevance:

20.00%

Publisher:

Abstract:

The Logit-Logistic (LL), Johnson's SB, and the Beta (GBD) are flexible four-parameter probability distribution models in terms of the (skewness, kurtosis) region covered, and each has been used for modeling tree diameter distributions in forest stands. This article compares bivariate forms of these models in terms of their adequacy in representing empirical diameter-height distributions from 102 sample plots. Four bivariate models are compared: SBB, the natural, well-known, and much-used bivariate generalization of SB, and the bivariate distributions with LL, SB, and Beta as marginals, constructed using Plackett's method (LL-2P, etc.). All models are fitted using maximum likelihood, and their goodness-of-fit is compared using minus log-likelihood (equivalent to Akaike's Information Criterion, the AIC). The performance ranking in this case study was SBB, LL-2P, GBD-2P, and SB-2P.
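Plackett's method couples two arbitrary marginal CDFs through the Plackett copula, whose single parameter θ controls dependence (θ = 1 gives independence). A minimal sketch of an LL-2P-style joint CDF, with hypothetical marginal parameters and θ (the paper's fitted values are not reproduced here):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(u):
    return 1.0 / (1.0 + math.exp(-u))

def ll_cdf(x, xi, lam, gamma, delta):
    # Logit-logistic marginal CDF.
    return expit(gamma + delta * logit((x - xi) / lam))

def plackett_copula(u, v, theta):
    # Plackett copula C(u, v; theta); theta = 1 reduces to independence u*v.
    if abs(theta - 1.0) < 1e-12:
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    disc = s * s - 4.0 * theta * (theta - 1.0) * u * v
    return (s - math.sqrt(disc)) / (2.0 * (theta - 1.0))

# Bivariate diameter-height CDF with LL marginals (LL-2P style).
def ll2p_cdf(d, h, d_par, h_par, theta):
    return plackett_copula(ll_cdf(d, *d_par), ll_cdf(h, *h_par), theta)

d_par = (2.0, 30.0, -0.4, 1.5)   # hypothetical diameter marginal parameters
h_par = (1.0, 25.0, 0.1, 1.2)    # hypothetical height marginal parameters
c = ll2p_cdf(14.0, 12.0, d_par, h_par, theta=3.0)
print(0.0 <= c <= 1.0)           # True: a valid joint CDF value
```

Swapping ll_cdf for an SB or Beta CDF gives the SB-2P and GBD-2P variants; only the marginals change, the copula is shared, which is what makes the comparison in the abstract like-for-like.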

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an investigation into applying Case-Based Reasoning (CBR) to multiple heterogeneous case bases using agents. The adaptive CBR process and the architecture of the system are presented, and a case study is used to illustrate and evaluate the approach. The process of creating and maintaining the dynamic data structures is discussed. The similarity metrics employed by the system support the optimisation of collaboration between the agents, which is based on the use of a blackboard architecture. The blackboard architecture is shown to support efficient collaboration between the agents to achieve an efficient overall CBR solution, while case-based reasoning methods allow the overall system to adapt and "learn" new collaborative strategies for achieving the aims of the overall CBR problem-solving process.

Relevance:

20.00%

Publisher:

Abstract:

This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel: to reuse approaches to distributed heterogeneous data architectures, and the lessons learned from that work, and apply them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents, and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.