895 results for Two-Phase Regression
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the “naïve” approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. Evaluating the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the estimator notably overestimated the true variance in all simulated stands except those that are clustered. Results show a systematic bias even when all the assumptions made by the authors are satisfied.
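The RC step can be illustrated with a minimal sketch in which the measurement error is additive with a variance assumed known (in the thesis it is supplied by the Stage and Wykoff estimator); the data and the function name here are hypothetical:

```python
import random

def regression_calibration(w, error_var):
    """Classical linear RC step: replace an error-prone covariate
    W = X + U with an estimate of E[X | W], assuming additive,
    independent measurement error U with known variance."""
    n = len(w)
    mean_w = sum(w) / n
    var_w = sum((wi - mean_w) ** 2 for wi in w) / (n - 1)
    var_x = max(var_w - error_var, 0.0)  # estimated variance of the true X
    shrink = var_x / var_w               # reliability ratio in (0, 1)
    return [mean_w + shrink * (wi - mean_w) for wi in w]

# Toy data: a true competition index X observed with noise U.
random.seed(1)
x = [random.gauss(10.0, 2.0) for _ in range(1000)]
w = [xi + random.gauss(0.0, 1.0) for xi in x]  # error variance = 1
x_hat = regression_calibration(w, error_var=1.0)
```

The calibrated values `x_hat` would then replace `w` as the competition covariate when fitting the logistic mortality model.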
I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and for the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with that of the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
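The contrast between the two approaches can be sketched as follows; the constant-hazard assumption, the coefficients and the toy remeasurement records are all hypothetical, chosen only to show how interval censoring enters the likelihood:

```python
import math

def logistic_death_prob(beta0, beta1, dbh, years):
    """Traditional approach: a logistic model gives an annual
    mortality probability, compounded over the interval."""
    p_annual = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * dbh)))
    return 1.0 - (1.0 - p_annual) ** years

def interval_censored_loglik(lam, intervals):
    """Survival approach with a constant hazard `lam`: a tree alive
    through (0, t2] contributes log S(t2) = -lam * t2; a tree that
    died somewhere in (t1, t2] contributes log(S(t1) - S(t2))."""
    ll = 0.0
    for t1, t2, died in intervals:
        if died:
            ll += math.log(math.exp(-lam * t1) - math.exp(-lam * t2))
        else:
            ll += -lam * t2
    return ll

# Toy remeasurement records: (last seen alive, next visit, died?).
data = [(0, 10, False), (0, 10, False), (5, 10, True), (0, 5, True)]
# Crude grid search for the maximum-likelihood hazard.
lam_hat = max((k / 1000.0 for k in range(1, 500)),
              key=lambda lam: interval_censored_loglik(lam, data))
```

A time-dependent diameter or competition effect would replace the constant `lam` with a hazard recomputed within each measurement interval.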
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were built using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
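A fitted CART model reduces to nested threshold rules. The sketch below is a hand-built illustration with hypothetical split variables and cut-points (say, on 1-5 survey scales), not the trees actually grown from the questionnaire data:

```python
def predict_usage(peer_support, ease_of_use, playfulness):
    """Hand-built, CART-style rule set: the first split uses the most
    important predictor; later splits refine within each branch."""
    if peer_support < 3:   # peer support emerged as the best predictor
        return "low"
    if ease_of_use < 3:
        return "low"
    return "high" if playfulness >= 4 else "moderate"
```

Reading a fitted tree as a rule set like this is what makes CART output directly interpretable alongside the qualitative phase.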
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that hindered the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt to and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic, then, is ‘both-and’ rather than ‘either-or’ for these individuals, who have the capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one that is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and to its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
Abstract:
A model to predict the buildup of mainly traffic-generated volatile organic compounds or VOCs (toluene, ethylbenzene, ortho-xylene, meta-xylene, and para-xylene) on urban road surfaces is presented. The model required three traffic parameters, namely average daily traffic (ADT), volume-to-capacity ratio (V/C), and surface texture depth (STD), and two chemical parameters, namely total suspended solids (TSS) and total organic carbon (TOC), as predictor variables. Principal component analysis and two-phase factor analysis were performed to characterize the model calibration parameters. Traffic congestion was found to be the underlying cause of traffic-related VOC buildup on urban roads. The model calibration was optimized using orthogonal experimental design. Partial least squares regression was used for model prediction. It was found that a better-optimized orthogonal design could be achieved by including the latent factors of the data matrix in the design. The model performed fairly accurately for three different land uses as well as five different particle size fractions. The relative prediction errors were 10-40% for the different size fractions and 28-40% for the different land uses, while the coefficients of variation of the predicted intersite VOC concentrations were in the range of 25-45% for the different size fractions. Considering the sizes of the data matrices, these coefficients of variation were within the acceptable interlaboratory range for analytes at ppb concentration levels.
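As a rough sketch of the prediction step, a one-component PLS1 fit can be written in a few lines of plain Python; the toy calibration data are hypothetical, and a real application would extract several latent components and cross-validate their number:

```python
def pls1_one_component(X, y):
    """One-component PLS1 sketch: project the centred predictors onto
    the direction of maximal covariance with y, then regress y on the
    resulting score vector."""
    n, p = len(X), len(X[0])
    # Centre X and y.
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [yi - ym for yi in y]
    # Weight vector w proportional to X'y, normalised.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(wj * wj for wj in w) ** 0.5
    w = [wj / norm for wj in w]
    # Scores t = Xc w and the inner regression coefficient q.
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(ti * ti for ti in t)
    q = sum(t[i] * yc[i] for i in range(n)) / tt
    def predict(row):
        score = sum((row[j] - xm[j]) * w[j] for j in range(p))
        return ym + q * score
    return predict

# Hypothetical calibration set: the response tracks the first two
# of three predictors.
X = [[1, 2, 0], [2, 3, 1], [3, 5, 0], [4, 6, 1], [5, 8, 0]]
y = [3.0, 5.1, 8.0, 10.1, 13.0]
predict = pls1_one_component(X, y)
```

Further components would be extracted in the same way from the deflated residual matrix.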
Abstract:
The paper provides a systematic approach to designing the laboratory phase of a multiphase experiment, taking into account previous phases. General principles are outlined for experiments in which orthogonal designs can be employed. Multiphase experiments occur widely, although their multiphase nature is often not recognized. The need to randomize the material produced from the first phase in the laboratory phase is emphasized. Factor-allocation diagrams are used to depict the randomizations in a design, and the use of skeleton analysis-of-variance (ANOVA) tables to evaluate their properties is discussed. The methods are illustrated using a scenario and a case study. A basis for categorizing designs is suggested. This article has supplementary material online.
Abstract:
Results of Raman spectroscopic studies of the (NH4)2ZnBr4 crystal in the spectral range 20-250 cm⁻¹ and over the temperature range from 90 K to 440 K, covering the low-temperature ferroelectric and high-temperature incommensurate phases, are presented. Plots of the integrated areas and peak heights of the strong Raman lines versus temperature show anomalous behaviour near the two phase transitions.
Abstract:
Short-time analytical solutions for the temperature and the moving boundary in two-dimensional two-phase freezing due to a cold spot are presented in this paper. The melt occupies a semi-infinite region. Although the method of solution is valid for various other types of boundary conditions, the results in this paper are given only for prescribed-flux boundary conditions, which may be space- and time-dependent. The freezing front propagations along the interior of the melt region exhibit well-known behaviours, but the propagations along the surface are of a new type. The freezing front always depends on the material parameters. Several interesting results can be obtained as particular cases of the general results.
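For orientation, the classical one-dimensional two-phase Stefan (Neumann) problem, with a wall held at a fixed temperature T_w below the freezing point T_f rather than the prescribed-flux condition treated in the paper, admits a closed-form front; the sketch below is the standard textbook reference case, not the paper's two-dimensional solution:

```latex
% Classical 1-D two-phase Neumann solution (fixed wall temperature
% T_w < T_f, melt initially at T_i > T_f); reference case only.
s(t) = 2\lambda\sqrt{\alpha_s t}, \qquad
T_s(x,t) = T_w + (T_f - T_w)\,
  \frac{\operatorname{erf}\!\left(x/(2\sqrt{\alpha_s t})\right)}
       {\operatorname{erf}\lambda},
% with the front constant \lambda fixed by the transcendental equation
\frac{e^{-\lambda^2}}{\operatorname{erf}\lambda}
  - \frac{k_\ell}{k_s}\sqrt{\frac{\alpha_s}{\alpha_\ell}}\,
    \frac{T_i - T_f}{T_f - T_w}\,
    \frac{e^{-\lambda^2 \alpha_s/\alpha_\ell}}
         {\operatorname{erfc}\!\left(\lambda\sqrt{\alpha_s/\alpha_\ell}\right)}
  = \frac{\lambda L \sqrt{\pi}}{c_s\,(T_f - T_w)}
```

Here subscripts s and ℓ denote the solid and liquid phases, k the thermal conductivity, α the thermal diffusivity, c_s the solid specific heat, and L the latent heat of fusion.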
Abstract:
The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even when the forest characteristics remain the same. This variation has so far been diminished by various radiometric corrections. In this study, the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When examining the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs, the clearest differences were found between the front- and back-lighted image halves. Within each image half, the differences between blocks were clearly bigger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands.
The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in regression models. Five different trend surfaces based on converted image co-ordinates were used in the models in order to diminish the effect of the bidirectional reflectance. The reference model of total volume, in which the location of the image point had been ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using either the front-lighted image features, the back-lighted image features, or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation. The best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result gives motivation for further research on multi-image interpretation. The focus may be improving regression estimation and feature selection, or examining the stratification used in two-phase sampling inventory techniques.
Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image coordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
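A trend surface of this kind is an ordinary polynomial least-squares fit in the converted image co-ordinates. The sketch below fits a complete second-order surface to hypothetical (x, y, feature) points; the study's best total-volume model used a complete third-order surface, which only adds more basis terms:

```python
def fit_trend_surface(points):
    """Least-squares fit of a complete second-order trend surface
    z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2."""
    def terms(x, y):
        return [1.0, x, y, x * x, x * y, y * y]
    rows = [terms(x, y) for x, y, _ in points]
    z = [p[2] for p in points]
    k = len(rows[0])
    # Normal equations A b = c.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(rows[n][i] * z[n] for n in range(len(z))) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for j in range(col, k):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    b = [0.0] * k
    for i in range(k - 1, -1, -1):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return lambda x, y: sum(bi * t for bi, t in zip(b, terms(x, y)))

# Hypothetical image points with a quadratic brightness trend.
points = [(x, y, 2.0 + 0.5 * x - 0.3 * y + 0.1 * x * x)
          for x in range(-2, 3) for y in range(-2, 3)]
surface = fit_trend_surface(points)
```

Subtracting `surface(x, y)` from each extracted feature would remove the modelled bidirectional-reflectance trend before the regression estimation step.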
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using a hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox assembled herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
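In schematic notation (hypothetical, not the thesis's own), with R_i indicating selection of unit i into the completely observed phase-two set and the phase-two data x_i observed only when R_i = 1, the two likelihoods can be contrasted as:

```latex
% Full likelihood: unselected units still contribute, through the
% marginal f(y_i; \theta) = \int f(y_i, x; \theta)\,dx.
L_{\text{full}}(\theta) = \prod_{i=1}^{N}
  \left[ f(y_i, x_i; \theta) \right]^{R_i}
  \left[ f(y_i; \theta) \right]^{1 - R_i}
% Conditional likelihood: restricted to the completely observed set,
% conditioning on the rule that generated it.
L_{\text{cond}}(\theta) = \prod_{i:\, R_i = 1}
  f(y_i, x_i \mid R_i = 1; \theta)
```

The full likelihood thus utilizes all observed information, while the conditional likelihood discards the incompletely observed units at the price of having to model the selection rule explicitly.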
Abstract:
Electrical resistance (R) measurements are reported for ternary mixtures of 3-methylpyridine, water and heavy water as a function of temperature (T) and of the heavy water content in total water. These mixtures exhibit a limited two-phase region marked by a loop size (ΔT) that goes to zero as the double critical point (DCP) is approached. The measurements spanned the range 1.010 °C ≤ ΔT ≤ 77.5 °C. The critical exponent (θ), which signifies the divergence of ∂R/∂T, doubles within our experimental uncertainties as the DCP is approached very closely.
Abstract:
Phase relations in the system CaO-Fe2O3-Y2O3 in air (P(O2)/P° = 0.21) were explored by equilibrating samples representing eleven compositions in the ternary at 1273 K, followed by quenching to room temperature and phase identification using XRD. Limited mutual solubility was observed between YFeO3 and Ca2Fe2O5. No quaternary oxide was identified. An isothermal section of the phase diagram at 1273 K was constructed from the results. Five three-phase regions and four extended two-phase regions were observed. The extended two-phase regions arise from the limited solid solutions based on the ternary oxides YFeO3 and Ca2Fe2O5. Activities of CaO, Fe2O3 and Y2O3 in the three-phase fields were computed using recently measured thermodynamic data on the ternary oxides. The experimental phase diagram is consistent with the thermodynamic data. The computed activities of CaO indicate that compositions of CaO-doped YFeO3 exhibiting good electrical conductivity are not compatible with zirconia-based electrolytes; CaO will react with ZrO2 to form CaZrO3.
Abstract:
Variable-temperature X-ray diffraction studies of C70 suggest the occurrence of two phase transitions around 350 and 280 K, where the high-temperature phase is fcc and the low-temperature phase is monoclinic, best described as a distorted hcp structure with a doubled unit cell; two like phases (possibly hcp) seem to coexist in the 280-350 K range. Application of pressure gives rise to three distinct transitions associated with characteristic pressure coefficients, the extrapolated values of the transition temperatures at ambient pressure being around 340, 325 and 270 K. Pressure delineates closely related phases of C70, just as in the case of C60, which exhibits two orientational phase transitions at high pressures.
Abstract:
Phase relations in the system Mn-Rh-O are established at 1273 K by equilibrating different compositions either in evacuated quartz ampules or in pure oxygen at a pressure of 1.01 × 10^5 Pa. The quenched samples are examined by optical microscopy, X-ray diffraction, and energy-dispersive X-ray analysis (EDAX). The alloys and intermetallics in the binary Mn-Rh system are found to be in equilibrium with MnO. There is only one ternary compound, MnRh2O4, with normal spinel structure in the system. The compound Mn3O4 has a tetragonal structure at 1273 K. A solid solution is formed between MnRh2O4 and Mn3O4. The solid solution has the cubic structure over a large range of composition and coexists with metallic rhodium. The partial pressure of oxygen corresponding to this two-phase equilibrium is measured as a function of the composition of the spinel solid solution and temperature. A new solid-state cell, with three separate electrode compartments, is designed to measure accurately the chemical potential of oxygen in the two-phase mixture, Rh + Mn(3-2x)Rh(2x)O4, which has 1 degree of freedom at constant temperature. From the electromotive force (emf), thermodynamic mixing properties of the Mn3O4-MnRh2O4 solid solution and Gibbs energy of formation of MnRh2O4 are deduced. The activities exhibit negative deviations from Raoult's law for most of the composition range, except near Mn3O4, where a two-phase region exists. In the cubic phase, the entropy of mixing of the two Rh3+ and Mn3+ ions on the octahedral site of the spinel is ideal, and the enthalpy of mixing is positive and symmetric with respect to composition. For the formation of the spinel (sp) from component oxides with rock salt (rs) and orthorhombic (orth) structures according to the reaction MnO (rs) + Rh2O3 (orth) → MnRh2O4 (sp), ΔG° = −49,680 + 1.56T (±500) J mol⁻¹.
The oxygen potentials corresponding to MnO + Mn3O4 and Rh + Rh2O3 equilibria are also obtained from potentiometric measurements on galvanic cells incorporating yttria-stabilized zirconia as the solid electrolyte. From these results, an oxygen potential diagram for the ternary system is developed.
Abstract:
The single perovskite slab alkylammonium lead iodides (CnH2n+1NH3)2PbI4, n = 12, 16, 18, display two phase transitions, just above room temperature, associated with changes in the alkylammonium chains. We have followed these two phase transitions using scanning calorimetry, X-ray powder diffraction, and IR and Raman spectroscopies. We find the first phase transition to be associated with symmetry changes arising from a dynamic rotational disordering of the ammonium headgroup of the chain, whereas the second transition, the melting of the chains in two dimensions, is characterized by an increased conformational disorder of the methylene units of the alkyl chains. We examine these phase transitions in light of the interesting optical properties of these materials, as well as the relevance of these systems as models for phase transitions in lipid bilayers.
Abstract:
Distributed space-time coding for wireless relay networks in which the source, the destination and the relays have multiple antennas has been studied by Jing and Hassibi. In this setup, the transmit and the receive signals at different antennas of the same relay are processed and designed independently, even though the antennas are colocated. In this paper, a wireless relay network with a single antenna at the source and the destination and two antennas at each of the R relays is considered. In the first phase of the two-phase transmission model, a T-length complex vector is transmitted from the source to all the relays. At each relay, the in-phase and quadrature component vectors of the complex vectors received at the two antennas are interleaved before processing them. After processing, in the second phase, a T × 2R matrix codeword is transmitted to the destination. The collection of all such codewords is called a co-ordinate interleaved distributed space-time code (CIDSTC). Compared to the scheme proposed by Jing-Hassibi, for T ≥ 4R, it is shown that while both schemes give the same asymptotic diversity gain, the CIDSTC scheme gives an additional asymptotic coding gain as well, at the cost of a negligible increase in the processing complexity at the relays.
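The interleaving step itself is simple to state. The sketch below swaps the quadrature components of the two received vectors and is only a generic illustration; the actual CIDSTC construction also applies relay-specific processing before the second-phase transmission:

```python
def interleave_iq(r1, r2):
    """Co-ordinate interleaving sketch: pair the in-phase (real)
    component of one antenna's received vector with the quadrature
    (imaginary) component of the other's, entry by entry."""
    a = [complex(v1.real, v2.imag) for v1, v2 in zip(r1, r2)]
    b = [complex(v2.real, v1.imag) for v1, v2 in zip(r1, r2)]
    return a, b

# Hypothetical received vectors at a relay's two antennas (T = 2).
r1 = [1 + 2j, 3 - 1j]
r2 = [-2 + 0.5j, 4j]
a, b = interleave_iq(r1, r2)
```

Coupling the two antennas' in-phase and quadrature components in this way is what lets the relay's subsequent processing extract the extra coding gain from co-located antennas.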
Abstract:
Phase equilibria in the Cu-rich corner of the ternary system Cu-Al-Sn have been re-investigated. Final equilibrium microstructures of 20 ternary alloy compositions near Cu3Al were used to refine the ternary phase diagram. The microstructures were characterized using optical microscopy (OM), X-ray diffraction (XRD), electron probe microanalysis and transmission electron microscopy. Isothermal sections at 853, 845, 833, 818, 808, 803 and 773 K have been composed. Vertical sections have been drawn at 2 and 3 at% Sn, showing β1 as a stable phase. Three-phase fields (α + β + β1) and (β + β1 + γ1) result from the β → α + β1 eutectoid and β + γ1 → β1 peritectoid reactions forming metastable β1 in the binary Cu-Al. With the lowering of temperature from 853 to 818 K, these three-phase fields shift to lower Sn concentrations, with simultaneous shrinkage and shifting of the (β + β1) two-phase field. The three-phase field (α + β + γ1) resulting from the binary reaction β → α + γ1 shifts to higher Sn contents, with associated shrinkage of the β field, with decreasing temperature. With further reduction of temperature, a new ternary invariant reaction β + β1 → α + γ1 is observed at ~813 K. The β phase disappears completely at 803 K, giving rise to the three-phase field (α + β1 + γ1). Some general guidelines on the role of ternary additions (M) on the stability of the ordered β1 phase are obtained by comparing the results of this study with data in the literature on other systems in the Cu-Al-M group.