960 results for Kernel density estimates


Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Urban encroachment on dense, coastal koala populations has ensured that their management receives increasing government and public attention. The recently developed National Koala Conservation Strategy calls for the maintenance of viable populations in the wild. Yet the success of this and other conservation initiatives is hampered by a lack of reliable and generally accepted national and regional population estimates. In this paper we address this problem in a potentially large, but poorly studied, regional population in the State that is likely to have the largest wild populations. We draw on findings from previous reports in this series and apply the faecal standing-crop method (FSCM) to derive a regional estimate of more than 59 000 individuals. Validation trials in riverine communities showed that estimates of animal density obtained from the FSCM and from direct observation were in close agreement. Bootstrapping and Monte Carlo simulations were used to obtain variance estimates for our population estimates in different vegetation associations across the region. The most favoured habitat was riverine vegetation, which covered only 0.9% of the region but supported 45% of the koalas. We also estimated that between 1969 and 1995 approximately 30% of the native vegetation associations considered potential koala habitat were cleared, leading to a decline of perhaps 10% in koala numbers. Management of this large regional population has significant implications for the national conservation of the species: its continued viability depends critically on the retention and management of riverine and residual vegetation communities, and future vegetation-management guidelines should be cognisant of the potential impacts of clearing even small areas of critical habitat. We also highlight eight management implications.
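
As a rough illustration of the faecal standing-crop calculation and the bootstrap variance step described above, the following Python sketch converts pellet counts into a density estimate; the pellet counts, plot area, deposition rate and decay time are all hypothetical values, not figures from the paper.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical pellet counts on fixed-area survey plots.
    pellet_counts = np.array([12, 0, 7, 30, 4, 0, 18, 9, 2, 25])
    plot_area_ha = 0.1          # area of each plot (ha), assumed
    deposition_rate = 150.0     # pellets per koala per day, illustrative
    mean_decay_days = 100.0     # mean pellet persistence, illustrative

    def fscm_density(counts):
        """FSCM: animals/ha = standing crop per ha
        / (deposition rate * mean decay time)."""
        pellets_per_ha = counts.mean() / plot_area_ha
        return pellets_per_ha / (deposition_rate * mean_decay_days)

    point = fscm_density(pellet_counts)

    # Bootstrap the plots for a variance estimate, echoing the paper's
    # bootstrapping/Monte Carlo step.
    boot = np.array([
        fscm_density(rng.choice(pellet_counts, size=pellet_counts.size,
                                replace=True))
        for _ in range(2000)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"density = {point:.4f} koalas/ha, 95% CI ({lo:.4f}, {hi:.4f})")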

Relevance:

30.00%

Publisher:

Abstract:

In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease increases variability in host abundance, and thus the probability of quasi-extinction, more strongly than does self-limitation. This result stems from the fact that both the effects and the probability of occurrence of disease are density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly.
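
A minimal simulation sketch of the comparison described above, contrasting quasi-extinction probabilities under a plain Ricker model and under a Ricker model with a density-dependent epidemic term; all parameter values and the form of the outbreak mortality are assumptions for illustration, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(1)

    r, K = 0.8, 500.0        # Ricker growth rate and carrying capacity (assumed)
    n0, T, reps = 250.0, 50, 5000
    quasi_ext = 20.0         # quasi-extinction threshold (assumed)

    def p_quasi_extinction(with_disease):
        extinct = 0
        for _ in range(reps):
            n = n0
            for _ in range(T):
                n = n * np.exp(r * (1.0 - n / K))      # Ricker update
                if with_disease:
                    # Epidemic arrival and severity both rise with density,
                    # mirroring the density-dependent disease process.
                    if rng.random() < min(1.0, n / (2.0 * K)):
                        n *= rng.uniform(0.3, 0.9)      # outbreak survivorship
                n *= np.exp(rng.normal(0.0, 0.1))       # environmental noise
                if n < quasi_ext:
                    extinct += 1
                    break
        return extinct / reps

    print("P(quasi-extinction), self-limitation only:", p_quasi_extinction(False))
    print("P(quasi-extinction), with disease:        ", p_quasi_extinction(True))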

Relevance:

30.00%

Publisher:

Abstract:

Adsorption of argon and nitrogen at their respective boiling points in cylindrical pores of MCM-41-type silica-like adsorbents is studied by means of a non-local density functional theory (NLDFT) modified to deal with amorphous solids. By matching the theoretical pore-filling pressure versus pore diameter against experimental data, we arrive at the conclusion that the adsorption branch (rather than desorption) corresponds to the true thermodynamic equilibrium. Accepting this, we derive optimal values of the solid–fluid molecular parameters for the amorphous silica–Ar and amorphous silica–N2 systems, and at the same time we can reliably derive the specific surface area of non-porous and mesoporous silica-like adsorbents without recourse to the BET method. The method is then extended to describe the local adsorption isotherms of argon and nitrogen in silica-like pores, which are used as the basis (kernel) to determine the pore size distribution. We test this on a number of adsorption isotherms of MCM-41 samples; the results are quite realistic and in excellent agreement with XRD results, justifying the approach adopted in this paper.
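
The kernel-based pore size distribution step amounts to inverting an adsorption integral equation, N_exp(P) ≈ Σ_j f_j N_local(P, w_j), for a non-negative distribution f. A sketch under that assumption, with a toy stand-in for the NLDFT local isotherms rather than the real kernel:

    import numpy as np
    from scipy.optimize import nnls

    # Synthetic stand-in for a kernel of NLDFT local isotherms:
    # rows = relative pressures, columns = pore widths. In practice each
    # column would come from the NLDFT calculation described above.
    pressures = np.linspace(0.01, 0.99, 60)
    pore_widths = np.linspace(2.0, 8.0, 25)          # nm, illustrative grid

    def local_isotherm(p, w):
        # Toy condensation step: pores fill near a width-dependent pressure.
        p_fill = 1.0 - np.exp(-w / 4.0)
        return 1.0 / (1.0 + np.exp(-(p - p_fill) * 80.0))

    A = np.array([[local_isotherm(p, w) for w in pore_widths]
                  for p in pressures])

    # A made-up "experimental" isotherm from a known narrow distribution.
    f_true = np.exp(-0.5 * ((pore_widths - 3.5) / 0.3) ** 2)
    n_exp = A @ f_true

    # Pore size distribution by non-negative least squares on the kernel:
    # minimise ||A f - n_exp|| subject to f >= 0.
    f_est, _ = nnls(A, n_exp)
    print("recovered PSD peak at", pore_widths[np.argmax(f_est)], "nm")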

Relevance:

30.00%

Publisher:

Abstract:

This technical report contains all technical information and results from experiments in which Mixture Density Networks (MDNs) using an RBF network with fixed kernel means and variances were used to infer the wind direction from data from the ERS-II weather satellite. The regularisation is based on the evidence framework, and three different approximations were used to estimate the regularisation parameter. The results were compared with those obtained by 'early stopping'.
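
A minimal sketch of an MDN of the kind described, in which the Gaussian mixture kernels have fixed means and variances and an RBF layer feeds a linear output layer producing only the mixing coefficients; all architecture sizes and data are made up, and the evidence-based regularisation is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    # Fixed mixture kernels: means and variances are frozen, as in the
    # report; the network learns only the mixing coefficients.
    kernel_means = np.linspace(0.0, 2.0 * np.pi, 8)   # wind directions (rad)
    kernel_var = 0.2

    # Fixed RBF hidden layer (centres and width assumed).
    rbf_centres = rng.uniform(-1.0, 1.0, size=(20, 2))
    rbf_width = 0.5

    def rbf_features(x):
        d2 = ((x[:, None, :] - rbf_centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * rbf_width ** 2))

    def mixture_nll(W, x, t):
        """Negative log-likelihood of targets t under the mixture whose
        priors are a softmax of the linear output layer W."""
        a = rbf_features(x) @ W                        # (N, 8) activations
        a -= a.max(axis=1, keepdims=True)
        priors = np.exp(a) / np.exp(a).sum(axis=1, keepdims=True)
        comp = np.exp(-0.5 * (t[:, None] - kernel_means) ** 2 / kernel_var)
        comp /= np.sqrt(2.0 * np.pi * kernel_var)
        return -np.log((priors * comp).sum(axis=1)).mean()

    # Usage with random data; in the report W would be fit by maximising
    # the evidence-regularised likelihood.
    x = rng.uniform(-1.0, 1.0, size=(100, 2))
    t = rng.uniform(0.0, 2.0 * np.pi, size=100)
    W = rng.normal(0.0, 0.1, size=(20, 8))
    print("NLL:", mixture_nll(W, x, t))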

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation in detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all the risk factors required for formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for practice nurses and 0.33 to 0.65 for general practitioners, depending on the calculation tool), with a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%) and the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data on risk factors need to be collected systematically to allow the use of the most appropriate calculation tools.
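
For reference, the kappa statistic quoted in the results measures agreement beyond chance between two raters; a small self-contained sketch with hypothetical high/low risk classifications:

    import numpy as np

    def cohens_kappa(a, b, categories):
        """Agreement between two raters beyond chance; the statistic
        reported in the study (e.g. kappa = 0.33 to 0.65)."""
        a, b = np.asarray(a), np.asarray(b)
        p_obs = (a == b).mean()
        p_chance = sum((a == c).mean() * (b == c).mean() for c in categories)
        return (p_obs - p_chance) / (1.0 - p_chance)

    # Hypothetical high/low CHD-risk classifications of ten patients by a
    # practice nurse versus the Framingham reference standard.
    nurse     = ["high", "low", "low", "high", "low",
                 "low", "high", "low", "low", "low"]
    reference = ["high", "low", "high", "high", "low",
                 "low", "low", "low", "low", "low"]
    print("kappa =", cohens_kappa(nurse, reference, ["high", "low"]))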

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we use the quantum Jensen-Shannon divergence as a means of measuring the information-theoretic dissimilarity of graphs and thus develop a novel graph kernel. In quantum mechanics, the quantum Jensen-Shannon divergence can be used to measure the dissimilarity of quantum systems specified in terms of their density matrices. We commence by computing the density matrix associated with a continuous-time quantum walk over each graph being compared. In particular, we adopt the closed-form solution of the density matrix introduced in Rossi et al. (2013) [27,28] to reduce the computational complexity and to avoid the cumbersome task of simulating the quantum walk evolution explicitly. Next, we compare the mixed states represented by the density matrices using the quantum Jensen-Shannon divergence. With the quantum states for a pair of graphs described by their density matrices to hand, the quantum graph kernel between the pair of graphs is defined using the quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets from both bioinformatics and computer vision. The experimental results demonstrate the effectiveness of the proposed quantum graph kernel.
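
For concreteness, a sketch of the two ingredients named above: a time-averaged density matrix of a continuous-time quantum walk (a plain numerical average standing in for the closed-form solution of Rossi et al.), and the quantum Jensen-Shannon divergence between two density matrices. The uniform initial state and the time grid are assumptions.

    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho log rho), from the eigenvalues of rho."""
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    def qjsd(rho, sigma):
        """Quantum Jensen-Shannon divergence between density matrices:
        S((rho+sigma)/2) - (S(rho)+S(sigma))/2."""
        return von_neumann_entropy(0.5 * (rho + sigma)) - 0.5 * (
            von_neumann_entropy(rho) + von_neumann_entropy(sigma))

    def ctqw_density_matrix(adjacency, times=np.linspace(0.1, 5.0, 20)):
        """Time-averaged density matrix of a continuous-time quantum walk
        generated by the graph Laplacian."""
        L = np.diag(adjacency.sum(axis=1)) - adjacency
        n = adjacency.shape[0]
        psi0 = np.ones(n, dtype=complex) / np.sqrt(n)   # uniform start state
        evals, evecs = np.linalg.eigh(L)
        rho = np.zeros((n, n), dtype=complex)
        for t in times:
            psi = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
            rho += np.outer(psi, psi.conj())
        return rho / len(times)

    # Usage: a triangle versus a three-node path graph.
    A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
    A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    print("QJSD:", qjsd(ctqw_density_matrix(A1), ctqw_density_matrix(A2)))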

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we develop a new graph kernel by using the quantum Jensen-Shannon divergence and the discrete-time quantum walk. To this end, we commence by performing a discrete-time quantum walk to compute a density matrix over each graph being compared. For a pair of graphs, we compare the mixed quantum states represented by their density matrices using the quantum Jensen-Shannon divergence. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the pair of graphs is defined by exponentiating the negative quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets, and demonstrate the effectiveness of the new kernel.
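
The final step, turning divergences into a kernel by exponentiation, can be sketched as follows; the divergence values here are hypothetical placeholders for ones computed from discrete-time quantum walks.

    import numpy as np

    # Hypothetical pairwise QJSD values for four graphs (symmetric, zero
    # diagonal); in the paper each entry would come from the walk.
    D = np.array([[0.00, 0.15, 0.40, 0.35],
                  [0.15, 0.00, 0.30, 0.25],
                  [0.40, 0.30, 0.00, 0.10],
                  [0.35, 0.25, 0.10, 0.00]])

    # The kernel of the paper: exponentiate the negative divergence.
    K = np.exp(-D)

    # Sanity checks commonly applied to such similarity matrices:
    # symmetry and (empirically, for this instance) positive
    # semidefiniteness via the smallest eigenvalue.
    print("symmetric:", np.allclose(K, K.T))
    print("min eigenvalue:", np.linalg.eigvalsh(K).min())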

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we use the quantum Jensen-Shannon divergence as a means to establish the similarity between a pair of graphs and to develop a novel graph kernel. In quantum theory, the quantum Jensen-Shannon divergence is defined as a distance measure between quantum states. In order to compute the quantum Jensen-Shannon divergence between a pair of graphs, we first need to associate a density operator with each of them. Hence, we simulate the evolution of a continuous-time quantum walk on each graph and propose a way to associate a suitable quantum state with it. With the density operator of this quantum state to hand, the graph kernel is defined as a function of the quantum Jensen-Shannon divergence between the graph density operators. We evaluate the performance of our kernel on several standard graph datasets from bioinformatics. We use Principal Component Analysis (PCA) on the kernel matrix to embed the graphs into a feature space for classification. The experimental results demonstrate the effectiveness of the proposed approach.
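
The embedding step can be sketched as kernel PCA on the centred kernel matrix; the kernel values below are hypothetical.

    import numpy as np

    def kernel_pca_embedding(K, n_components=2):
        """Embed items into a feature space from a kernel matrix by
        double-centring K and keeping the leading eigenvectors."""
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                                   # double centring
        evals, evecs = np.linalg.eigh(Kc)
        order = np.argsort(evals)[::-1][:n_components]   # largest first
        evals, evecs = evals[order], evecs[:, order]
        return evecs * np.sqrt(np.clip(evals, 0.0, None))

    # Usage with a small hypothetical kernel matrix between four graphs.
    K = np.exp(-np.array([[0.00, 0.15, 0.40, 0.35],
                          [0.15, 0.00, 0.30, 0.25],
                          [0.40, 0.30, 0.00, 0.10],
                          [0.35, 0.25, 0.10, 0.00]]))
    print(kernel_pca_embedding(K))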

Relevance:

30.00%

Publisher:

Abstract:

Kernel methods provide a way to apply a wide range of learning techniques to complex and structured data by shifting the representational problem from one of finding an embedding of the data to that of defining a positive semidefinite kernel. In this paper, we propose a novel kernel on unattributed graphs where the structure is characterized through the evolution of a continuous-time quantum walk. More precisely, given a pair of graphs, we create a derived structure whose degree of symmetry is maximum when the original graphs are isomorphic. With this new graph to hand, we compute the density operators of the quantum systems representing the evolutions of two suitably defined quantum walks. Finally, we define the kernel between the two original graphs as the quantum Jensen-Shannon divergence between these two density operators. The experimental evaluation shows the effectiveness of the proposed approach.
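
A heavily simplified sketch of this construction, assuming a plain disjoint union as the derived structure and symmetric/antisymmetric uniform superpositions as the two walks' initial states (the paper's actual derived structure and initial states may differ):

    import numpy as np

    def avg_density(L, psi0, times):
        # Time-averaged density operator of a walk generated by L.
        evals, evecs = np.linalg.eigh(L)
        rho = np.zeros((len(psi0), len(psi0)), dtype=complex)
        for t in times:
            psi = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
            rho += np.outer(psi, psi.conj())
        return rho / len(times)

    def entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log2(w)).sum())

    def union_qjsd(A1, A2, times=np.linspace(0.1, 5.0, 20)):
        """QJSD between the density operators of two walks on the
        (assumed) disjoint-union structure; it is maximal when the
        graphs are isomorphic, matching the symmetry argument above."""
        n1, n2 = A1.shape[0], A2.shape[0]
        A = np.block([[A1, np.zeros((n1, n2))],
                      [np.zeros((n2, n1)), A2]])
        L = np.diag(A.sum(1)) - A
        plus = np.concatenate([np.ones(n1), np.ones(n2)]) / np.sqrt(n1 + n2)
        minus = np.concatenate([np.ones(n1), -np.ones(n2)]) / np.sqrt(n1 + n2)
        r1 = avg_density(L, plus.astype(complex), times)
        r2 = avg_density(L, minus.astype(complex), times)
        return entropy(0.5 * (r1 + r2)) - 0.5 * (entropy(r1) + entropy(r2))

    A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
    print("self-comparison QJSD:", union_qjsd(A1, A1))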

Relevance:

30.00%

Publisher:

Abstract:

The Antarctic Pack Ice Seal (APIS) Program was initiated in 1994 to estimate the abundance of four species of Antarctic phocids: the crabeater seal Lobodon carcinophaga, Weddell seal Leptonychotes weddellii, Ross seal Ommatophoca rossii and leopard seal Hydrurga leptonyx, and to identify ecological relationships and habitat use patterns. The Atlantic sector of the Southern Ocean (the eastern sector of the Weddell Sea) was surveyed by research teams from Germany, Norway and South Africa using a range of aerial methods over five austral summers between 1996-1997 and 2000-2001. We used these observations to model densities of seals in the area, taking into account haul-out probabilities, survey-specific sighting probabilities and covariates derived from satellite-based ice concentrations and bathymetry. These models predicted the total abundance over the area bounded by the surveys (30°W and 10°E). In this sector of the coast, we estimated seal abundances of 514 (95% CI 337-886) × 10³ crabeater seals, 60.0 (43.2-94.4) × 10³ Weddell seals and 13.2 (5.50-39.7) × 10³ leopard seals. The crabeater seal densities, approximately 14,000 seals per degree of longitude, are similar to estimates obtained by surveys in the Pacific and Indian sectors by other APIS researchers. Very few Ross seals were observed (24 in total), leading to a conservative estimate of 830 (119-2894) individuals over the study area. These results provide an important baseline against which to compare future changes in seal distribution and abundance.
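
A toy sketch of the kind of correction involved: scaling raw counts by haul-out and sighting probabilities and propagating uncertainty by Monte Carlo. Every number below is hypothetical, and the paper's covariate-based models are far richer.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical strip-transect survey values for one ice stratum.
    seals_counted = 420
    surveyed_km2 = 150.0
    stratum_km2 = 12_000.0
    p_haulout = 0.65   # probability a seal is hauled out at survey time (assumed)
    p_sight = 0.85     # survey-specific sighting probability (assumed)

    # Point estimate: correct the density, then scale up to the stratum.
    density = seals_counted / (surveyed_km2 * p_haulout * p_sight)
    print(f"estimated abundance: {density * stratum_km2:,.0f}")

    # Monte Carlo propagation of uncertainty in the correction factors,
    # loosely analogous to the interval estimation in the paper.
    sims = (rng.poisson(seals_counted, 10_000)
            / (surveyed_km2
               * rng.normal(p_haulout, 0.05, 10_000)
               * rng.normal(p_sight, 0.03, 10_000))) * stratum_km2
    print("95% interval:", np.percentile(sims, [2.5, 97.5]).round(0))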

Relevance:

30.00%

Publisher:

Abstract:

The distribution of sources and sinks of carbon over the land surface is dominated by changes in land use such as deforestation, reforestation, and agricultural management. Despite the importance of land-use change in dominating long-term net terrestrial fluxes of carbon, estimates of the annual flux are uncertain relative to other terms in the global carbon budget. The interaction of the nitrogen cycle (via atmospheric N inputs and N limitation) with the carbon cycle contributes to the uncertain effect of land-use change on terrestrial carbon uptake. This study uses two different land use datasets to force the geographically explicit terrestrial carbon-nitrogen coupled component of the Integrated Science Assessment Model (ISAM), to examine the response of terrestrial carbon stocks to historical land cover and land use change (LCLUC: cropland, pastureland and wood harvest) while accounting for changes in N deposition, atmospheric CO2 and climate. One of the land use datasets is based on satellite data (SAGE), while the other uses population density maps (HYDE), which allows this study to investigate how the construction of global LCLUC data can affect model-estimated emissions. The timeline chosen for this study runs from 1765, before the Industrial Revolution, to the year 2000, because of the influence of rising population and economic development on regional LCLUC. Additionally, this study evaluates the impact that the resulting secondary forests may have on terrestrial carbon uptake. The ISAM model simulations indicate that uncertainties in net terrestrial carbon fluxes during the 1990s are largely due to uncertainties in regional LCLUC data. The results also show that secondary forests increase the terrestrial carbon sink, but that carbon uptake by secondary tropical forests is constrained by nutrient limitation.

Relevance:

30.00%

Publisher:

Abstract:

Eucalyptus pellita demonstrated good growth and wood quality traits in this study, with young plantation-grown timber being suitable for both solid and pulp wood products. All traits examined were under moderate levels of genetic control, with little genotype-by-environment interaction when grown on two contrasting sites in Vietnam. Eucalyptus pellita currently has a significant role in reforestation in the tropics, and research to support expanded use of this species is needed: in particular, research to better understand the genetic control of key traits will facilitate the development of genetically improved planting stock. This study aimed to estimate the heritability of diameter at breast height over bark, wood basic density, Kraft pulp yield, modulus of elasticity and microfibril angle, and the genetic correlations among these traits, and to assess the importance of genotype-by-environment interactions in Vietnam. Data for diameter and wood properties were collected from two 10-year-old, open-pollinated progeny trials of E. pellita in Vietnam that evaluated 104 families from six native-range and three orchard sources. Wood properties were estimated from wood samples using near-infrared (NIR) spectroscopy. Data were analysed using mixed linear models to estimate genetic parameters (heritability, proportion of variance between seed sources and genetic correlations). Variation among the nine sources was small compared to the additive variance. Narrow-sense heritability and genetic correlation estimates indicated that simultaneous improvements in most traits could be achieved by selection among and within families, as the genetic correlations among traits were either favourable or close to zero. Type B genetic correlations approached one for all traits, suggesting that genotype-by-environment interactions were of little importance. These results, based on both growth and wood properties, support a breeding strategy utilizing a single breeding population advanced by selecting the best individuals across all seed sources. Multi-trait selection for growth and wood property traits will lead to populations of E. pellita with both improved productivity and improved timber and pulp properties.
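
As a sketch of how narrow-sense heritability is obtained from such open-pollinated family trials, the following simulates a balanced trial and estimates h² from one-way variance components; the family structure, variances and the half-sib coefficient of 4 are illustrative assumptions, not values from the study.

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated open-pollinated trial: 40 families, 20 trees each.
    n_fam, n_per = 40, 20
    sigma2_f, sigma2_e = 4.0, 20.0          # family and residual variances
    fam_effects = rng.normal(0.0, np.sqrt(sigma2_f), n_fam)
    y = fam_effects[:, None] + rng.normal(0.0, np.sqrt(sigma2_e),
                                          (n_fam, n_per))

    # One-way ANOVA variance components (balanced case).
    ms_between = n_per * y.mean(axis=1).var(ddof=1)
    ms_within = y.var(axis=1, ddof=1).mean()
    var_f = (ms_between - ms_within) / n_per
    var_p = var_f + ms_within

    # Narrow-sense heritability. For true half-sibs sigma2_A = 4 * var_f;
    # open-pollinated eucalypt families are often analysed with a smaller
    # coefficient (e.g. 2.5) to allow for selfing and relatedness.
    h2 = 4.0 * var_f / var_p
    print(f"h^2 estimate: {h2:.2f}")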