951 results for Covariance matrix estimation
Abstract:
One of the applications of nanomaterials is as reinforcements in composites, wherein small additions of nanomaterials lead to large enhancements in mechanical properties. There have been extensive studies in the literature on composites in which a polymer matrix is reinforced by a single nanomaterial such as carbon nanotubes. In this article, we examine the significant synergistic effects observed when two different types of nanocarbons are incorporated in a polymer matrix. Thus, binary combinations of nanodiamond, few-layer graphene, and single-walled nanotubes have been used to reinforce polyvinyl alcohol. The mechanical properties of the resulting composites, evaluated by nanoindentation, show extraordinary synergy, improving the stiffness and hardness by as much as 400% relative to those obtained with single nanocarbon reinforcements. These results suggest a way of designing advanced materials with extraordinary mechanical properties by incorporating small amounts of two nanomaterials, such as graphene plus nanodiamond or nanodiamond plus carbon nanotube.
Abstract:
There is an increasing need to compare the results obtained with different methods of estimation of tree biomass in order to reduce the uncertainty in the assessment of forest biomass carbon. In this study, tree biomass was investigated in a 30-year-old Scots pine (Pinus sylvestris) stand (Young-Stand) and a 130-year-old mixed Norway spruce (Picea abies)–Scots pine stand (Mature-Stand) located in southern Finland (61°50' N, 24°22' E). In particular, a comparison of the results of different estimation methods was conducted to assess the reliability and suitability of their applications. For the trees in Mature-Stand, the annual stem biomass increment fluctuated following a sigmoid equation, and the fitted curves reached a maximum level (from about 1 kg/yr for understorey spruce to 7 kg/yr for dominant pine) when the trees were 100 years old. Tree biomass was estimated to be about 70 Mg/ha in Young-Stand and about 220 Mg/ha in Mature-Stand. In the region (58.00–62.13 °N, 14–34 °E, ≤ 300 m a.s.l.) surrounding the study stands, tree biomass accumulation in Norway spruce and Scots pine stands followed a sigmoid equation with stand age, with a maximum of 230 Mg/ha at the age of 140 years. In Mature-Stand, lichen biomass on the trees was 1.63 Mg/ha, with more than half of the biomass occurring on dead branches, and the standing crop of litter lichen on the ground was about 0.09 Mg/ha. There were substantial differences among the results estimated by the different methods in the stands. These results imply that possible estimation errors should be taken into account when calculating tree biomass in a stand with an indirect approach.
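A curve of the kind described above is easy to reproduce. The following is a minimal sketch, assuming a logistic form for the sigmoid and using synthetic placeholder data chosen to be roughly consistent with the reported ~230 Mg/ha plateau near age 140 (these are not the study's measurements):

```python
# Fit a sigmoid (logistic) biomass-age curve, as in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(age, b_max, k, age_mid):
    """Logistic growth: biomass approaches b_max (Mg/ha) asymptotically."""
    return b_max / (1.0 + np.exp(-k * (age - age_mid)))

# Hypothetical stand ages (yr) and biomass (Mg/ha), illustrative only.
ages = np.array([20, 40, 60, 80, 100, 120, 140, 160])
biomass = np.array([15, 55, 110, 160, 195, 215, 228, 230])

params, _ = curve_fit(sigmoid, ages, biomass, p0=[230.0, 0.05, 70.0])
b_max, k, age_mid = params
print(f"fitted asymptote: {b_max:.0f} Mg/ha, inflection at {age_mid:.0f} yr")
```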
Abstract:
This thesis examines the feasibility of a forest inventory method based on two-phase sampling for estimating forest attributes at the stand or substand level for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data, consisting of remote sensing imagery or other geographic information, with field measurements. Auxiliary data are utilized as first-phase data covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data in the form of correcting the spectral properties of aerial imagery was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V). Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features into the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages over a traditional sample plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone. The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverse values of their mean square errors. Although plot-level estimation accuracy in a two-phase sampling inventory can be improved in many ways, forest estimates based mainly on single-view satellite and aerial imagery remain a relatively poor basis for making stand-level management decisions.
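The combination rule stated above (weighting each source's estimate by the inverse of its mean square error) reduces to a normalized weighted mean. A minimal sketch with illustrative placeholder numbers:

```python
# Inverse-MSE weighting of estimates from several data sources.
import numpy as np

estimates = np.array([182.0, 176.0, 195.0])  # e.g. estimates from 3 sources
mse = np.array([25.0, 40.0, 90.0])           # mean square error of each source

weights = (1.0 / mse) / np.sum(1.0 / mse)    # normalize the inverse MSEs
combined = np.sum(weights * estimates)
print(f"weights: {np.round(weights, 3)}, combined estimate: {combined:.1f}")
```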
Abstract:
Background A pandemic strain of influenza A spread rapidly around the world in 2009, now referred to as pandemic (H1N1) 2009. This study aimed to examine the spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 associated with changes in local socio-environmental conditions from May 7 to December 31, 2009, at a postal area level in Queensland, Australia. Method We used the data on laboratory-confirmed H1N1 cases to examine the spatiotemporal dynamics of transmission using a flexible Bayesian, space–time, Susceptible-Infected-Recovered (SIR) modelling approach. The model incorporated parameters describing spatiotemporal variation in H1N1 infection and local socio-environmental factors. Results The weekly transmission rate of pandemic (H1N1) 2009 was negatively associated with the weekly area-mean maximum temperature at a lag of 1 week (LMXT) (posterior mean: −0.341; 95% credible interval (CI): −0.370 to −0.311) and the socio-economic index for area (SEIFA) (posterior mean: −0.003; 95% CI: −0.004 to −0.001), and was positively associated with the product of LMXT and the weekly area-mean vapour pressure at a lag of 1 week (LVAP) (posterior mean: 0.008; 95% CI: 0.007 to 0.009). There was substantial spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 across Queensland over the epidemic period. High random effects of estimated transmission rates were apparent in remote areas and in some postal areas with a higher proportion of Indigenous populations and smaller overall populations. Conclusions Local SEIFA and local atmospheric conditions were associated with the transmission rate of pandemic (H1N1) 2009. The more populated regions displayed consistent and synchronized epidemics with low average transmission rates. The less populated regions had high average transmission rates with more variation during the H1N1 epidemic period.
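For readers unfamiliar with the model class, the sketch below shows a toy discrete-time SIR update with a log-linear transmission rate in the covariates named above. It is an illustration only, not the authors' Bayesian space-time model: the coefficients echo the reported posterior means, but the baseline rate, recovery rate, population and covariate values are hypothetical, and the covariates are assumed standardized.

```python
import numpy as np

def weekly_beta(lmxt, seifa, lvap, beta0=0.5):
    """Log-linear transmission rate; signs follow the abstract (negative for
    LMXT and SEIFA, positive for the LMXT * LVAP interaction)."""
    return beta0 * np.exp(-0.341 * lmxt - 0.003 * seifa + 0.008 * lmxt * lvap)

def sir_step(S, I, R, beta, gamma=0.3):
    """One weekly Susceptible-Infected-Recovered update (deterministic)."""
    N = S + I + R
    new_inf = beta * S * I / N
    new_rec = gamma * I
    return S - new_inf, I + new_inf - new_rec, R + new_rec

S, I, R = 9990.0, 10.0, 0.0
for week in range(10):
    beta = weekly_beta(lmxt=0.5, seifa=-1.0, lvap=0.3)  # standardized values
    S, I, R = sir_step(S, I, R, beta)
print(f"infectious after 10 weeks: {I:.0f}")
```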
Abstract:
Age estimation from facial images is increasingly receiving attention as a means of solving age-based access control, age-adaptive targeted marketing, and other applications. Since even humans can be led into error by the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) in order to obtain a highly discriminative feature representation that is able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) to classify a subject into an age group, followed by Support Vector Regression (SVR) to estimate a specific age. The errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated for in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, and mean absolute errors (MAE) of 4.50 and 5.86 years were achieved, respectively. The robustness of the proposed approach was also evaluated on a merged set of both datasets, and a MAE of 5.20 years was achieved. Furthermore, we have also compared age estimation by humans with the proposed approach, and the comparison shows that the machine outperforms humans. The proposed approach is competitive with the current state of the art, and the local phase features provide additional robustness to blur, lighting and expression variance.
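The hierarchical classification-then-regression idea can be sketched compactly. The example below is not the paper's pipeline (the AAM/LBP/GW/LPQ feature fusion is replaced by random placeholder features), but it shows a multi-class SVM routing each sample to an SVR trained on an overlapping age range:

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))             # placeholder feature vectors
ages = rng.uniform(5, 70, size=300)
groups = np.digitize(ages, bins=[20, 40])  # 3 coarse age groups

clf = SVC().fit(X, groups)                 # multi-class age-group classifier

# One regressor per group, each trained on an overlapping age range so that
# samples near a group boundary inform both neighbouring regressors (the
# "flexible overlap" compensating for hard classification boundaries).
bounds = [(5, 25), (15, 45), (35, 70)]     # note the overlaps
regs = []
for lo, hi in bounds:
    mask = (ages >= lo) & (ages <= hi)
    regs.append(SVR().fit(X[mask], ages[mask]))

x_new = rng.normal(size=(1, 32))
g = int(clf.predict(x_new)[0])
print(f"group {g}, estimated age {regs[g].predict(x_new)[0]:.1f}")
```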
Abstract:
Fisheries managers are becoming increasingly aware of the need to quantify all forms of harvest, including that by recreational fishers. This need has been driven by both a growing recognition of the potential impact that noncommercial fishers can have on exploited resources and the requirement to allocate catch limits between different sectors of the wider fishing community in many jurisdictions. Marine recreational fishers are rarely required to report any of their activity, and some form of survey technique is usually required to estimate levels of recreational catch and effort. In this review, we describe and discuss studies that have attempted to estimate the nature and extent of recreational harvests of marine fishes in New Zealand and Australia over the past 20 years. We compare studies by method to show how circumstances dictate their application and to highlight recent developments that other researchers may find of use. Although there has been some convergence of approach, we suggest that context is an important consideration, and many of the techniques discussed here have been adapted to suit local conditions and to address recognized sources of bias. Much of this experience, along with novel improvements to existing approaches, has been reported only in "gray" literature because of an emphasis on providing estimates for immediate management purposes. This paper brings much of that work together for the first time, and we discuss how others might benefit from our experience.
Abstract:
It is shown how the single-site coherent potential approximation and the averaged T-matrix approximation become exact in the calculation of the averaged single-particle Green function of the electron in the Anderson model when the site energy is distributed randomly with a Lorentzian distribution. With these approximations, Lloyd's exact result is reproduced.
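The exactness rests on the Lorentzian average of the resolvent having a closed form. A quick numerical check in the single-site (zero-hopping) limit, where averaging G(E) = 1/(E + iη − ε) over a Lorentzian of half-width Γ centred at ε₀ gives 1/(E − ε₀ + i(η + Γ)):

```python
import numpy as np

rng = np.random.default_rng(1)
E, eta, eps0, Gamma = 0.5, 0.01, 0.0, 0.3

eps = eps0 + Gamma * rng.standard_cauchy(2_000_000)  # Lorentzian site energies
g_avg = np.mean(1.0 / (E + 1j * eta - eps))          # Monte Carlo average
g_exact = 1.0 / (E - eps0 + 1j * (eta + Gamma))      # closed-form result

# The two agree up to Monte Carlo error (~1e-2 here).
print(f"sampled:  {g_avg:.3f}")
print(f"analytic: {g_exact:.3f}")
```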
Abstract:
NeEstimator v2 is a completely revised and updated implementation of software that produces estimates of contemporary effective population size, using several different methods and a single input file. NeEstimator v2 includes three single-sample estimators (updated versions of the linkage disequilibrium and heterozygote-excess methods, and a new method based on molecular coancestry), as well as the two-sample (moment-based temporal) method. New features include the following: (i) an improved method for accounting for missing data; (ii) options for screening out rare alleles; (iii) confidence intervals for all methods; (iv) the ability to analyse data sets with large numbers of genetic markers (10000 or more); (v) options for batch processing large numbers of different data sets, which will facilitate cross-method comparisons using simulated data; and (vi) correction for temporal estimates when sampled individuals are not removed from the population (Plan I sampling). The user is given considerable control over the input data and over the composition and format of the output files. The freely available software has a new Java interface and runs under MacOS, Linux and Windows.
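As orientation, the core logic of the linkage-disequilibrium method can be written in a few lines. This is a textbook simplification (after Hill's approximation E[r²] ≈ 1/(3Ne) + 1/S for random mating), not NeEstimator v2's actual implementation, which adds bias corrections and the rare-allele screening mentioned above:

```python
def ld_ne_estimate(mean_r2: float, sample_size: int) -> float:
    """Point estimate of effective population size from the mean squared
    allele-frequency correlation (r^2) across locus pairs."""
    adjusted = mean_r2 - 1.0 / sample_size  # remove the sampling contribution
    if adjusted <= 0:
        return float("inf")                 # signal consistent with large Ne
    return 1.0 / (3.0 * adjusted)

print(ld_ne_estimate(mean_r2=0.015, sample_size=100))  # ~66.7
```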
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual’s previous length measurement and time at liberty are then derived. We moment-match the true conditional distributions with skew-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag–recapture data and tag–recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
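The final step, turning a predicted future-length distribution into STM entries, amounts to differencing its CDF across the size-class edges. A minimal sketch using a skew-normal whose parameters are set by hand; in the paper they come from moment-matching the conditional moments derived from the fitted von Bertalanffy model:

```python
import numpy as np
from scipy.stats import skewnorm

bin_edges = np.arange(10.0, 42.0, 2.0)  # size-class edges (hypothetical units)

# Hypothetical future-length distribution for one starting size class:
# shape (skew), location, scale chosen by hand for illustration.
a, loc, scale = -3.0, 24.0, 2.5
cdf = skewnorm.cdf(bin_edges, a, loc=loc, scale=scale)
row = np.diff(cdf)        # probability mass in each destination size class
row /= row.sum()          # renormalize mass within the modelled bins

# Every entry is a valid probability; negative growth can be excluded by
# zeroing bins below the starting size class before renormalizing.
print(np.round(row, 3))
```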
Abstract:
Terrain traversability estimation is a fundamental requirement to ensure the safety of autonomous planetary rovers and their ability to conduct long-term missions. This paper addresses two fundamental challenges for terrain traversability estimation techniques. First, representations of terrain data, which are typically built by the rover’s onboard exteroceptive sensors, are often incomplete due to occlusions and sensor limitations. Second, during terrain traversal, the rover-terrain interaction can cause terrain deformation, which may significantly alter the difficulty of traversal. We propose a novel approach built on Gaussian process (GP) regression to learn, and consequently to predict, the rover’s attitude and chassis configuration on unstructured terrain using terrain geometry information only. First, given incomplete terrain data, we make an initial prediction under the assumption that the terrain is rigid, using a learnt kernel function. Then, we refine this initial estimate to account for the effects of potential terrain deformation, using a near-to-far learning approach based on multitask GP regression. We present an extensive experimental validation of the proposed approach on terrain that is mostly rocky and whose geometry changes as a result of loads from rover traversals. This demonstrates the ability of the proposed approach to accurately predict the rover’s attitude and configuration in partially occluded and deformable terrain.
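The first stage of such an approach is, at its core, a GP regression from terrain-geometry features to an attitude angle. A minimal sketch with synthetic placeholder data (not the paper's multitask, near-to-far formulation):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
# Placeholder features: e.g. local slope, roughness, step height.
X = rng.uniform(-1, 1, size=(80, 3))
pitch = 10.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + rng.normal(0, 0.2, 80)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X, pitch)

x_query = np.array([[0.3, -0.5, 0.1]])
mean, std = gp.predict(x_query, return_std=True)  # prediction + uncertainty
print(f"predicted pitch: {mean[0]:.2f} deg ± {std[0]:.2f}")
```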
Abstract:
Data-driven approaches such as Gaussian Process (GP) regression have been used extensively in recent robotics literature to achieve estimation by learning from experience. To ensure satisfactory performance, in most cases, multiple learning inputs are required. Intuitively, adding new inputs can often contribute to better estimation accuracy; however, it may come at the cost of a new sensor, a larger training dataset and/or more complex learning, sometimes for limited benefit. Therefore, it is crucial to have a systematic procedure to determine the actual impact each input has on the estimation performance. To address this issue, in this paper we propose to analyse the impact of each input on the estimate using a variance-based sensitivity analysis method. We propose an approach built on Analysis of Variance (ANOVA) decomposition, which can characterise how the prediction changes as one or more of the inputs change, and can also quantify the prediction uncertainty attributed to each of the inputs in the framework of dependent inputs. We apply the proposed approach to a terrain-traversability estimation method we proposed in prior work, which is based on multi-task GP regression, and we validate this implementation experimentally using a rover on a Mars-analogue terrain.
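For independent inputs, first-order variance-based indices can be estimated with the standard pick-freeze construction; the sketch below uses a toy stand-in for the trained GP predictor. Note that the paper's framework also covers dependent inputs, which this simplified sketch does not:

```python
import numpy as np

def model(x):                     # placeholder for e.g. gp.predict(x)
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(3)
n, d = 200_000, 3
A = rng.uniform(-1, 1, size=(n, d))
B = rng.uniform(-1, 1, size=(n, d))
var_y = np.var(model(A))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # re-sample input i only (pick-freeze)
    # First-order index: share of output variance explained by input i alone.
    S_i = np.mean(model(B) * (model(ABi) - model(A))) / var_y
    print(f"S_{i} ≈ {S_i:.3f}")
```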
Abstract:
We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time is reduced from days with MCMC to hours with PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
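The structure of PMC is simple to convey on a toy problem: sample from a proposal, compute importance weights against the target, and adapt the proposal to the weighted sample. The sketch below does this for a 1-D Gaussian target standing in for the cosmological posterior; each iteration's weight evaluations are embarrassingly parallel, which is where the wall-clock advantage comes from:

```python
import numpy as np
from scipy.stats import norm

def log_target(x):                       # toy target: N(3, 0.5^2)
    return norm.logpdf(x, loc=3.0, scale=0.5)

rng = np.random.default_rng(4)
mu, sigma = 0.0, 5.0                     # deliberately poor initial proposal

for it in range(8):
    x = rng.normal(mu, sigma, size=5000)              # sample the proposal
    logw = log_target(x) - norm.logpdf(x, mu, sigma)  # importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Adapt the proposal to the weighted sample (moment matching).
    mu = np.sum(w * x)
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))

print(f"adapted proposal: mean {mu:.2f}, sd {sigma:.2f}")  # ≈ 3.0, 0.5
```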
Abstract:
In this paper, we examine approaches to estimating a Bayesian mixture model at both single and multiple time points for a sample of actual and simulated aerosol particle size distribution (PSD) data. For estimation of a mixture model at a single time point, we use Reversible Jump Markov Chain Monte Carlo (RJMCMC) to estimate the mixture model parameters, including the number of components, which is assumed to be unknown. We compare the results of this approach to a commonly used estimation method in the aerosol physics literature. As PSD data are often measured over time, frequently at small time intervals, we also examine the use of an informative prior for estimation of the mixture parameters which takes into account the correlated nature of the parameters. The Bayesian mixture model offers a promising approach, providing advantages in both estimation and inference.
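RJMCMC itself is too involved for a short sketch; as a deliberately simpler stand-in for choosing the number of components (a different technique, not the authors' method), the example below selects K for a Gaussian mixture by BIC on synthetic log-diameter data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Synthetic placeholder for log particle diameters with two modes.
logd = np.concatenate([rng.normal(-2.0, 0.3, 400),    # "nucleation" mode
                       rng.normal(-0.5, 0.4, 600)])   # "accumulation" mode
X = logd.reshape(-1, 1)

fits = [GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 6)]
bic = [m.bic(X) for m in fits]
best = int(np.argmin(bic)) + 1
print(f"BIC selects {best} components")                # expect 2
```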
Abstract:
We have used the density matrix renormalization group (DMRG) method to study the linear and nonlinear optical responses of first-generation nitrogen-based dendrimers with donor-acceptor groups. We have employed the Pariser–Parr–Pople Hamiltonian to model the interacting π electrons in these systems. Within the DMRG method we have used an innovative scheme to target excited states with a large transition dipole to the ground state. This method reproduces exact optical gaps and polarization in systems where exact diagonalization of the Hamiltonian is possible. We have used a correction vector method, which tacitly takes into account the contribution of all excited states, to obtain the ground state polarizability, first hyperpolarizability, and two-photon absorption cross sections. We find that the lowest optical excitations as well as the lowest excited triplet states are localized. It is interesting to note that the first hyperpolarizability saturates more rapidly with system size than the linear polarizability, unlike in linear polyenes.