899 results for two-Gaussian mixture model
Abstract:
Site index prediction models are an important aid for forest management and planning activities. This paper introduces a multiple regression model for spatially mapping and comparing site indices for two Pinus species (Pinus elliottii Engelm. and Queensland hybrid, a P. elliottii × Pinus caribaea Morelet hybrid) based on independent variables derived from two major sources: gamma-ray spectrometry (potassium (K), thorium (Th), and uranium (U)) and a digital elevation model (elevation, slope, curvature, hillshade, flow accumulation, and distance to streams). In addition, interpolated rainfall was tested. Species were coded as a dichotomous dummy variable; interaction effects between species and the gamma-ray spectrometric and geomorphologic variables were considered. The model explained up to 60% of the variance of site index, and the standard error of the estimate was 1.9 m. Uranium, elevation, distance to streams, thorium, and flow accumulation correlate significantly with the spatial variation of the site index of both species, while hillshade, curvature, elevation, and slope accounted for the extra variability of one species over the other. The predicted site indices varied between 20.0 and 27.3 m for P. elliottii, and between 23.1 and 33.1 m for Queensland hybrid; the advantage of Queensland hybrid over P. elliottii ranged from 1.8 to 6.8 m, with a mean of 4.0 m. This compartment-based prediction and comparison study provides not only an overview of the forest productivity of the whole plantation area studied but also a management tool at compartment scale.
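A minimal sketch of the kind of model described above: ordinary least squares with terrain and radiometric predictors, a species dummy, and a species-by-predictor interaction. The variable names, synthetic data, and coefficient values are illustrative assumptions, not the authors' dataset or fitted model.

    # Sketch: site-index regression with a species dummy and a
    # species x elevation interaction (synthetic data for illustration).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    uranium   = rng.normal(2.0, 0.5, n)      # gamma-ray spectrometric band (illustrative units)
    elevation = rng.normal(60.0, 10.0, n)    # DEM-derived predictor
    dist_str  = rng.normal(300.0, 80.0, n)   # distance to streams
    species   = rng.integers(0, 2, n)        # 0 = P. elliottii, 1 = Queensland hybrid

    # Synthetic response: the hybrid gets an intercept shift plus an elevation interaction.
    site_index = (22.0 + 0.8 * uranium + 0.03 * elevation - 0.004 * dist_str
                  + species * (3.5 + 0.02 * elevation) + rng.normal(0.0, 1.5, n))

    # Design matrix: main effects, species dummy, species x elevation interaction.
    X = np.column_stack([np.ones(n), uranium, elevation, dist_str,
                         species, species * elevation])
    coef, _, _, _ = np.linalg.lstsq(X, site_index, rcond=None)

    fitted = X @ coef
    resid = site_index - fitted
    r2 = 1.0 - resid.var() / site_index.var()
    see = np.sqrt(resid @ resid / (n - X.shape[1]))   # standard error of estimate
    print("coefficients:", np.round(coef, 3))
    print("R^2 = %.2f, SEE = %.2f m" % (r2, see))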
Abstract:
Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, then pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert a strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-stage approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. Then SDMs can be fitted separately within each envelope, and for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve the treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curve, but focus on its constituent measures, the false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas a zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species that has previously been published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fitted within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence being in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet visible in FOR, FNR, and the comparison of the predicted probability of presence distributions for presences and absences.
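The error rates discussed above all follow from the confusion matrix obtained once a threshold is applied to the predicted probability of presence. A small sketch, with an illustrative threshold and synthetic presence/absence data standing in for real SDM output:

    # Confusion-matrix error rates for presence/absence predictions at a threshold.
    import numpy as np

    def sdm_error_rates(p_pred, observed, threshold=0.5):
        """p_pred: predicted probability of presence; observed: 1 = presence, 0 = absence."""
        pred = (np.asarray(p_pred) >= threshold).astype(int)
        obs = np.asarray(observed).astype(int)
        tp = np.sum((pred == 1) & (obs == 1))
        fp = np.sum((pred == 1) & (obs == 0))
        fn = np.sum((pred == 0) & (obs == 1))
        tn = np.sum((pred == 0) & (obs == 0))
        return {
            "FPR": fp / (fp + tn) if fp + tn else np.nan,  # absences predicted present
            "FNR": fn / (fn + tp) if fn + tp else np.nan,  # presences predicted absent
            "FOR": fn / (fn + tn) if fn + tn else np.nan,  # predicted absences that were presences
            "FDR": fp / (fp + tp) if fp + tp else np.nan,  # predicted presences that were absences
        }

    # Example with an imbalanced sample (many pseudo-absences, few presences).
    rng = np.random.default_rng(1)
    obs = np.r_[np.ones(50, int), np.zeros(950, int)]
    p = np.clip(0.15 + 0.5 * obs + rng.normal(0, 0.15, obs.size), 0, 1)
    print(sdm_error_rates(p, obs, threshold=0.3))

Sweeping the threshold and recomputing these rates is what traces out the ROC curve; FOR and FDR add the user-oriented view of how predicted absences and presences translate into wasted or spurious records.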
Abstract:
We compared daily net radiation (Rn) estimates from 19 methods with the ASCE-EWRI Rn estimates in two climates: Clay Center, Nebraska (sub-humid) and Davis, California (semi-arid) for the calendar year. The performances of all 20 methods, including the ASCE-EWRI Rn method, were then evaluated against Rn data measured over a non-stressed maize canopy during two growing seasons in 2005 and 2006 at Clay Center. Methods differ in terms of inputs, structure, and equation intricacy. Most methods differ in estimating the cloudiness factor and emissivity (ε), and in calculating net longwave radiation (Rnl). All methods use an albedo (α) of 0.23 for a reference grass/alfalfa surface. When comparing the performance of all 20 Rn methods with measured Rn, we hypothesized that the α values for grass/alfalfa and a non-stressed maize canopy were similar enough to cause only minor differences in Rn and grass- and alfalfa-reference evapotranspiration (ETo and ETr) estimates. The measured seasonal average α for the maize canopy was 0.19 in both years. Using α = 0.19 instead of α = 0.23 resulted in a 6% overestimation of Rn. Using α = 0.19 instead of α = 0.23 for ETo and ETr estimations, the 6% difference in Rn translated to only 4% and 3% differences in ETo and ETr, respectively, supporting the validity of our hypothesis. Most methods had good correlations with the ASCE-EWRI Rn (r² > 0.95). The root mean square difference (RMSD) was less than 2 MJ m⁻² d⁻¹ between 12 methods and the ASCE-EWRI Rn at Clay Center and between 14 methods and the ASCE-EWRI Rn at Davis. The performance of some methods showed variations between the two climates. In general, r² values were higher for the semi-arid climate than for the sub-humid climate. Methods that use a dynamic ε as a function of mean air temperature performed better in both climates than those that calculate ε using actual vapor pressure. The ASCE-EWRI-estimated Rn values had one of the best agreements with the measured Rn (r² = 0.93, RMSD = 1.44 MJ m⁻² d⁻¹), and estimates were within 7% of the measured Rn. The Rn estimates from six methods, including the ASCE-EWRI, were not significantly different from measured Rn. Most methods underestimated measured Rn by 6% to 23%. Some of the differences between measured and estimated Rn were attributed to the poor estimation of Rnl. We conducted sensitivity analyses to evaluate the effect of Rnl on Rn, ETo, and ETr. The Rnl effect on Rn was linear and strong, but its effect on ETo and ETr was minor. Results suggest that Rn data measured over green vegetation (e.g., an irrigated maize canopy) can be an alternative Rn data source for ET estimations when measured Rn data over the reference surface are not available. In the absence of measured Rn, another alternative would be to use one of the Rn models we analyzed when not all the input variables are available to solve the ASCE-EWRI Rn equation. Our results can be used to provide practical information on which method to select, based on data availability, for reliable estimates of daily Rn in climates similar to those at Clay Center and Davis.
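The albedo sensitivity reported above can be illustrated with the usual net radiation balance Rn = (1 − α)Rs − Rnl; the incoming shortwave and net longwave values below are illustrative assumptions, not the measured data.

    # Effect of albedo on daily net radiation, Rn = (1 - alpha) * Rs - Rnl.
    Rs = 22.0    # incoming shortwave, MJ m-2 d-1 (illustrative)
    Rnl = 4.0    # net outgoing longwave, MJ m-2 d-1 (illustrative)

    def net_radiation(albedo, rs=Rs, rnl=Rnl):
        return (1.0 - albedo) * rs - rnl

    rn_grass = net_radiation(0.23)   # reference-surface albedo
    rn_maize = net_radiation(0.19)   # measured maize-canopy albedo
    print("Rn(0.23) = %.2f, Rn(0.19) = %.2f MJ m-2 d-1" % (rn_grass, rn_maize))
    print("relative difference = %.1f%%" % (100 * (rn_maize - rn_grass) / rn_grass))

With these placeholder values the lower albedo raises Rn by roughly 6-7%, the same order as the overestimation the authors report.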
Abstract:
In this article, we describe and compare two individual-based models constructed to investigate how genetic factors influence the development of phosphine resistance in the lesser grain borer (Rhyzopertha dominica). One model is based on the simplifying assumption that resistance is conferred by alleles at a single locus, while the other is based on the more realistic assumption that resistance is conferred by alleles at two separate loci. We simulated the population dynamics of R. dominica in the absence of phosphine fumigation, and under high- and low-dose phosphine treatments, and found important differences between the predictions of the two models in all three cases. In the absence of fumigation, starting from the same initial frequencies of genotypes, the two models tended to different stable frequencies, although both reached Hardy-Weinberg equilibrium. The one-locus model overestimated the equilibrium proportion of strongly resistant beetles by a factor of 3.6 compared to the aggregated predictions of the two-locus model. Under a low-dose treatment, the one-locus model overestimated the proportion of strongly resistant individuals within the population and underestimated the total population numbers compared to the two-locus model. These results show the importance of basing resistance evolution models on realistic genetics, and that using oversimplified one-locus models to develop pest control strategies runs the risk of not correctly identifying tactics to minimise the incidence of pest infestation.
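For intuition about the one-locus formulation, a toy sketch of allele frequencies under random mating with differential survival of genotypes during fumigation; the fitness values and initial frequency are illustrative assumptions and not parameters of the models described above (which are individual-based rather than deterministic).

    # Toy one-locus model: resistance allele frequency under recurrent fumigation.
    # Genotypes SS, SR, RR with illustrative survival under a phosphine treatment.
    survival = {"SS": 0.05, "SR": 0.20, "RR": 0.90}   # illustrative fitnesses
    p = 0.01   # initial frequency of the resistance allele R

    for generation in range(20):
        q = 1.0 - p
        # Hardy-Weinberg genotype frequencies before selection.
        f_rr, f_sr, f_ss = p * p, 2 * p * q, q * q
        # Mean fitness, then the allele frequency after selection.
        w = f_rr * survival["RR"] + f_sr * survival["SR"] + f_ss * survival["SS"]
        p = (f_rr * survival["RR"] + 0.5 * f_sr * survival["SR"]) / w

    print("R allele frequency after 20 generations: %.3f" % p)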
Abstract:
A kinetic model has been developed for the bulk polymerization of vinyl chloride using Talamini's hypothesis of two-phase polymerization and a new concept of kinetic solubility, which assumes that rapidly growing polymer chains have considerably greater solubility than the thermodynamic solubility of preformed polymer molecules of the same size and so can remain in solution even under thermodynamically unfavourable conditions. It is further assumed that this kinetic solubility is a function of chain length. The model yields a rate expression consistent with the experimental data for vinyl chloride bulk polymerization and, moreover, is able to explain several characteristic kinetic features of this system. Application of the model rate expression to the available rate data has yielded 2.36 × 10⁸ l mol⁻¹ sec⁻¹ for the termination rate constant in the polymer-rich phase; as expected, this value is smaller than that reported for homogeneous polymerization by a factor of 10-30.
Abstract:
A numerical modelling technique for predicting the detailed performance of a double-inlet type two-stage pulse tube refrigerator has been developed. The pressure variations in the compressor, pulse tube, and reservoir were derived, assuming the stroke volume variation of the compressor to be sinusoidal. The relationships of mass flowrates, volume flowrates, and temperature as a function of time and position were developed. The predicted refrigeration powers are calculated by considering the effect of void volumes and the phase shift between pressure and mass flowrate. These results are compared with the experimental results of a specific pulse tube refrigerator configuration and an existing theoretical model. The analysis shows that the theoretical predictions are in good agreement with each other.
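A toy sketch, not the model described above: isothermal ideal-gas pressure in a compressor whose volume varies sinusoidally, connected by a linear orifice to a fixed-volume reservoir. All geometry, gas properties, and coefficients are illustrative assumptions; the point is only to show how a sinusoidal stroke volume translates into pressure and mass-flow variations.

    # Toy sketch: sinusoidal compressor volume driving gas through a linear
    # orifice into a fixed reservoir (isothermal ideal gas, illustrative values).
    import numpy as np

    R, T = 2077.0, 300.0           # helium gas constant (J/kg/K) and temperature (K)
    V0, Va, f = 50e-6, 20e-6, 2.0  # mean volume, amplitude (m^3), frequency (Hz)
    V_res = 1.0e-3                 # reservoir volume (m^3)
    C = 2.0e-9                     # orifice conductance, kg/(s*Pa) (illustrative)

    m_c, m_r = 1.0e-4, 2.0e-3      # initial gas masses in compressor and reservoir (kg)
    dt, n = 1e-4, 50000
    for i in range(n):
        t = i * dt
        V_c = V0 + Va * np.sin(2 * np.pi * f * t)   # sinusoidal stroke volume
        p_c = m_c * R * T / V_c                     # isothermal ideal-gas pressures
        p_r = m_r * R * T / V_res
        mdot = C * (p_c - p_r)                      # mass flow, compressor -> reservoir
        m_c -= mdot * dt
        m_r += mdot * dt

    print("final compressor/reservoir pressure: %.0f / %.0f Pa" % (p_c, p_r))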
Abstract:
This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely, hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for understanding the research papers. We also review the standard linear cosmological perturbation theory. The second order perturbation theory formalism for two scalar fields is developed. We explain what is meant by non-Gaussian perturbations, and discuss some difficulties in parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied. We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance as well as tachyonic preheating are prone to easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities, which differ from the perturbation theory method chosen in the thesis work.
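For orientation, the nonlinearity parameter mentioned above is conventionally defined through the local parametrisation of the gravitational potential (or, analogously, the curvature perturbation); this is the standard convention quoted for context, not a formula taken from the thesis:

    \Phi(\mathbf{x}) \;=\; \Phi_{\mathrm{g}}(\mathbf{x}) \;+\; f_{\mathrm{NL}}\left[\Phi_{\mathrm{g}}^{2}(\mathbf{x}) - \langle \Phi_{\mathrm{g}}^{2}\rangle\right],

where \Phi_{\mathrm{g}} is a Gaussian field, so that purely Gaussian perturbations correspond to f_{\mathrm{NL}} = 0 and the bispectrum (three-point correlator) is the leading signature of f_{\mathrm{NL}} \neq 0.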
Abstract:
The self-diffusion properties of pure CH4 and its binary mixture with CO2 within NaY zeolite have been investigated by combining an experimental quasi-elastic neutron scattering (QENS) technique and classical molecular dynamics simulations. The QENS measurements carried out at 200 K led to an unexpected self-diffusivity profile for pure CH4, with the presence of a maximum for a loading of 32 CH4/unit cell, which was never observed before for the diffusion of apolar species in a zeolite system with large windows. Molecular dynamics simulations were performed using two distinct microscopic models for representing the CH4/NaY interactions. Depending on the model, we are able to fairly reproduce either the magnitude or the profile of the self-diffusivity. Further analysis allowed us to provide some molecular insight into the diffusion mechanism at play. The QENS measurements report only a slight decrease of the self-diffusivity of CH4 in the presence of CO2 when the CO2 loading increases. Molecular dynamics simulations successfully capture this experimental trend and suggest a plausible microscopic diffusion mechanism in the case of this binary mixture.
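Self-diffusivities of the kind discussed above are typically extracted from molecular dynamics trajectories through the Einstein relation, D_s = lim_{t→∞} ⟨|r(t) − r(0)|²⟩ / (6t). A minimal sketch, with a synthetic random-walk trajectory standing in for the MD output (trajectory, time step, and displacement scale are illustrative assumptions):

    # Self-diffusion coefficient from the Einstein relation, using a synthetic
    # random-walk trajectory as a stand-in for an MD trajectory of CH4 molecules.
    import numpy as np

    rng = np.random.default_rng(2)
    n_mol, n_steps, dt = 64, 5000, 1.0e-12           # molecules, frames, frame spacing (s)
    step_sigma = 5.0e-11                             # per-frame displacement scale (m), illustrative
    traj = np.cumsum(rng.normal(0.0, step_sigma, (n_steps, n_mol, 3)), axis=0)

    # Mean-squared displacement averaged over molecules and time origins, per lag time.
    lags = np.arange(100, 2000, 100)
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=-1)) for lag in lags])

    # D_s from a linear fit of MSD(t) = 6 D t.
    slope = np.polyfit(lags * dt, msd, 1)[0]
    print("D_s = %.2e m^2/s" % (slope / 6.0))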
Abstract:
Over the past two decades, the selection, optimization, and compensation (SOC) model has been applied in the work context to investigate antecedents and outcomes of employees' use of action regulation strategies. We systematically review, meta-analyze, and critically discuss the literature on SOC strategy use at work and outline directions for future research and practice. The systematic review illustrates the breadth of constructs that have been studied in relation to SOC strategy use, and that SOC strategy use can mediate and moderate relationships of person and contextual antecedents with work outcomes. Results of the meta-analysis show that SOC strategy use is positively related to age (rc = .04), job autonomy (rc = .17), self-reported job performance (rc = .23), non-self-reported job performance (rc = .21), job satisfaction (rc = .25), and job engagement (rc = .38), whereas SOC strategy use is not significantly related to job tenure, job demands, and job strain. Overall, our findings underline the importance of the SOC model for the work context, and they also suggest that its measurement and reporting standards need to be improved to become a reliable guide for future research and organizational practice.
Abstract:
The Internet has made possible the cost-effective dissemination of scientific journals in the form of electronic versions, usually in parallel with the printed versions. At the same time the electronic medium also makes possible totally new open access (OA) distribution models, funded by author charges, sponsorship, advertising, voluntary work, etc., where the end product is free in full text to the readers. Although more than 2,000 new OA journals have been founded in the last 15 years, the uptake of open access has been rather slow, with currently around 5% of all peer-reviewed articles published in OA journals. The slow growth can to a large extent be explained by the fact that open access has predominantly emerged via newly founded journals and startup publishers. Established journals and publishers have not had strong enough incentives to change their business models, and the commercial risks in doing so have been high. In this paper we outline and discuss two different scenarios for how scholarly publishers could change their operating model to open access. The first is based on an instantaneous change and the second on a gradual change. We propose a way to manage the gradual change by bundling traditional “big deal” licenses and author charges for opening access to individual articles.
Abstract:
Constellation Constrained (CC) capacity regions of a two-user Gaussian Multiple Access Channel (GMAC) have been recently reported. For such a channel, code pairs based on trellis coded modulation are proposed in this paper with M-PSK and M-PAM alphabet pairs, for arbitrary values of M, to achieve sum rates close to the CC sum capacity of the GMAC. In particular, the structure of the sum alphabets of M-PSK and M-PAM alphabet pairs is exploited to prove that, for certain angles of rotation between the alphabets, Ungerboeck labelling on the trellis of each user maximizes the guaranteed squared Euclidean distance of the sum trellis. Hence, such a labelling scheme can be used systematically to construct trellis code pairs to achieve sum rates close to the CC sum capacity. More importantly, it is shown for the first time that the ML decoding complexity at the destination is significantly reduced when M-PAM alphabet pairs are employed, with almost no loss in the sum capacity.
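For context, the capacity region of a two-user multiple access channel is characterised by the usual mutual-information bounds; the constellation-constrained version evaluates them with the inputs fixed to the chosen finite alphabets (typically with uniform, independent input distributions). These are the standard expressions, quoted for orientation rather than taken from the paper:

    R_1 \le I(X_1; Y \mid X_2), \qquad
    R_2 \le I(X_2; Y \mid X_1), \qquad
    R_1 + R_2 \le I(X_1, X_2; Y),

where Y = X_1 + X_2 + N with N Gaussian noise; the CC sum capacity is the sum-rate bound I(X_1, X_2; Y) evaluated with X_1 and X_2 drawn uniformly and independently from the M-PSK or M-PAM alphabets (with one alphabet rotated relative to the other).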
Abstract:
A two-dimensional model is proposed for taking into account the establishment of contact on the compression side of the crack faces in plates under bending. An approximate but simple method is developed for evaluating the reduction of the stress intensity factor due to such ‘crack closure’. The analysis is first carried out permitting interference of the crack faces. Contact forces are then introduced on the crack faces, and their magnitudes are determined from the condition that the interference is just eliminated. The method is based partly on finite element analysis and partly on a continuum analysis using Irwin's solution for point loads on the crack line.
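For reference, the point-load solution alluded to above has the standard handbook form for a pair of concentrated forces P (per unit thickness) acting on the faces of a through crack of length 2a in an infinite sheet, at a distance b from the crack centre; this is the in-plane (membrane) form, quoted for context rather than from the paper itself, which applies it within a plate-bending superposition:

    K_I(\pm a) \;=\; \frac{P}{\sqrt{\pi a}}\,\sqrt{\frac{a \pm b}{a \mp b}}.

Superposing such point-load solutions along the contact zone is what allows the contact-force magnitudes, and hence the reduction in stress intensity factor, to be evaluated.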