417 results for Melting methods
Abstract:
Multivariate methods are required to assess the interrelationships among multiple, concurrent symptoms. We examined the conceptual and contextual appropriateness of commonly used multivariate methods for cancer symptom cluster identification. From 178 publications identified in an online database search of Medline, CINAHL, and PsycINFO, limited to articles published in English in the 10 years prior to March 2007, 13 cross-sectional studies met the inclusion criteria. Conceptually, common factor analysis (FA) and hierarchical cluster analysis (HCA) are appropriate for symptom cluster identification, whereas principal component analysis is not. As a basis for new directions in symptom management, FA methods are more appropriate than HCA. Principal axis factoring or maximum likelihood factoring, the scree plot, oblique rotation, and clinical interpretation are the recommended approaches to symptom cluster identification.
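For readers unfamiliar with the workflow recommended above (factor extraction, scree plot, oblique rotation, clinical interpretation of loadings), a minimal sketch follows. It assumes the third-party Python package factor_analyzer and a hypothetical pandas DataFrame of per-patient symptom severity scores; the file name, column names, and the choice of three factors are placeholders, not values from the reviewed studies.

```python
# Illustrative sketch only: exploratory factor analysis of symptom ratings with
# maximum likelihood extraction, a scree plot, and oblique (oblimin) rotation.
import matplotlib.pyplot as plt
import pandas as pd
from factor_analyzer import FactorAnalyzer

symptoms = pd.read_csv("symptom_severity.csv")   # hypothetical dataset of severity scores

# Scree plot: eigenvalues of the correlation matrix guide how many factors to retain.
fa_unrotated = FactorAnalyzer(rotation=None, method="ml")
fa_unrotated.fit(symptoms)
eigenvalues, _ = fa_unrotated.get_eigenvalues()
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.xlabel("Factor"); plt.ylabel("Eigenvalue"); plt.title("Scree plot")
plt.show()

# Refit with the retained number of factors and an oblique rotation, since symptom
# clusters are expected to be correlated; loadings are then interpreted clinically.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="ml")
fa.fit(symptoms)
loadings = pd.DataFrame(fa.loadings_, index=symptoms.columns)
print(loadings.round(2))
```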
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were developed using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors (see the sketch following this abstract). Peer support emerged as the best predictor of SMC usage; other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that impeded the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic, then, is ‘both/and’ rather than ‘either/or’ for these individuals, who have the capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and to its complex relationship with students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
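The sketch below illustrates the incremental CART modelling strategy described in the quantitative phase above, using scikit-learn's decision tree as a stand-in for whatever CART software the thesis actually used. The data file, column names, usage variable, tree depth, and cross-validation setup are all hypothetical placeholders.

```python
# Illustrative sketch: fit three nested CART-style models (individual, +social,
# +technological predictors) and compare their cross-validated accuracy.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

survey = pd.read_csv("smc_survey.csv")        # hypothetical questionnaire data (N=481)
y = survey["smc_usage_level"]                 # e.g. low / medium / high usage

individual = ["learning_goals", "performance_goals",
              "cognitive_playfulness", "personal_innovativeness"]
social = individual + ["peer_support"]
technological = social + ["perceived_usefulness", "perceived_ease_of_use"]

for label, predictors in [("individual", individual),
                          ("individual + social", social),
                          ("individual + social + technological", technological)]:
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)  # pruned CART-style tree
    score = cross_val_score(tree, survey[predictors], y, cv=5).mean()
    print(f"{label:40s} mean CV accuracy = {score:.2f}")
```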
ADI-Euler and extrapolation methods for the two-dimensional fractional advection-dispersion equation
Abstract:
Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second approach is a predictive indirect method which uses an interactive graphical tool. The third method uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine the prior and posterior distributions obtained under the three methods. The results indicate that there are both similarities and dissimilarities between the expert-informed priors formulated for the two experts under the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, accessibility to experts and the funding available. This trial reveals that expert knowledge can be important when modelling rare event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However, care must be taken with the way in which this information is elicited and formulated.
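As a minimal sketch of how elicited knowledge ultimately enters a Bayesian logistic regression, the code below places informative priors on the slope and aspect coefficients, assuming the PyMC library. The prior means and standard deviations, the simulated data, and the variable names are placeholders for illustration only, not the elicited values from either expert in the study.

```python
# Sketch: Bayesian logistic regression for presence/absence with expert-informed priors.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
slope = rng.normal(size=200)                  # standardised site slope (simulated)
aspect = rng.normal(size=200)                 # standardised site aspect (simulated)
presence = rng.binomial(1, 0.2, size=200)     # observed rock-wallaby presence (simulated)

with pm.Model():
    intercept = pm.Normal("intercept", mu=0.0, sigma=2.0)
    b_slope = pm.Normal("b_slope", mu=1.0, sigma=0.5)      # placeholder expert-informed prior
    b_aspect = pm.Normal("b_aspect", mu=-0.5, sigma=0.5)   # placeholder expert-informed prior
    p = pm.math.sigmoid(intercept + b_slope * slope + b_aspect * aspect)
    pm.Bernoulli("obs", p=p, observed=presence)
    idata = pm.sample(1000, tune=1000)        # posterior to compare against the elicited prior
```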
Abstract:
The highly variable flagellin-encoding flaA gene has long been used for genotyping Campylobacter jejuni and Campylobacter coli. High-resolution melting (HRM) analysis is emerging as an efficient and robust method for discriminating DNA sequence variants. The objective of this study was to apply HRM analysis to flaA-based genotyping. The initial aim was to identify a suitable flaA fragment. It was found that the PCR primers commonly used to amplify the flaA short variable repeat (SVR) yielded a mixed PCR product unsuitable for HRM analysis. However, a primer set composed of the upstream primer used to amplify the fragment for flaA restriction fragment length polymorphism (RFLP) analysis and the downstream primer used for flaA SVR amplification generated a very pure PCR product, and this primer set was used for the remainder of the study. Eighty-seven C. jejuni and 15 C. coli isolates were analyzed by flaA HRM and also by partial flaA sequencing. There were 47 flaA sequence variants, and all were resolved by HRM analysis. The isolates used had previously been genotyped using single-nucleotide polymorphisms (SNPs), binary markers, CRISPR HRM, and flaA RFLP. flaA HRM analysis provided resolving power multiplicative to that of the SNPs, binary markers, and CRISPR HRM, and largely concordant with that of flaA RFLP. It was concluded that HRM analysis is a promising approach to genotyping based on highly variable genes.
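To make the notion of "multiplicative" resolving power concrete: combining independent typing schemes distinguishes isolates by the tuple of all their results, so the number of distinguishable combined genotypes can grow multiplicatively. The sketch below illustrates this bookkeeping; the isolate table, file name, and column names are hypothetical.

```python
# Sketch: count distinct genotypes per scheme and for the combined profile.
import pandas as pd

isolates = pd.read_csv("campy_typing.csv")   # hypothetical table, one row per isolate
schemes = ["snp_type", "binary_type", "crispr_hrm_type", "flaa_hrm_type"]

for scheme in schemes:
    print(scheme, isolates[scheme].nunique(), "types")

# Combined genotype: the tuple of all scheme results for each isolate.
combined = isolates[schemes].astype(str).agg("|".join, axis=1)
print("combined profiles:", combined.nunique())
```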
Abstract:
A new steady-state method for determination of the electron diffusion length in dye-sensitized solar cells (DSCs) is described and illustrated with data obtained using cells containing three different types of electrolyte. The method is based on using near-IR absorbance measurements to establish pairs of illumination intensities for which the total number of trapped electrons is the same at open circuit (where all electrons are lost by interfacial electron transfer) as at short circuit (where the majority of electrons are collected at the contact). Electron diffusion length values obtained by this method are compared with values derived by intensity-modulated methods and by impedance measurements under illumination. The results indicate that the values of electron diffusion length derived from the steady-state measurements are consistently lower than the values obtained by the non-steady-state methods. For all three electrolytes used in the study, the electron diffusion length was sufficiently high to guarantee electron collection efficiencies greater than 90%. Measurement of the trap distributions by near-IR absorption confirmed earlier observations of much higher electron trap densities for electrolytes containing Li+ ions. It is suggested that the electron trap distributions may not be intrinsic properties of the TiO2 nanoparticles, but may instead be associated with electron-ion interactions.
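For orientation, the quantity being compared across the steady-state and non-steady-state techniques is the usual electron diffusion length, defined from the electron diffusion coefficient and lifetime. This is the standard textbook relation, not a result specific to the method above; the symbols D_n, tau_n, and the film thickness d are introduced here only for illustration.

```latex
L_n \;=\; \sqrt{D_n\,\tau_n},
\qquad
\eta_{\mathrm{col}} \rightarrow 1 \quad \text{when } L_n \gg d \ (\text{TiO}_2 \text{ film thickness}),
```

which is the condition underlying collection efficiencies above 90%.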
Abstract:
Background: Work-related injuries in Australia are estimated to cost around $57.5 billion annually; however, there are currently insufficient surveillance data available to support an evidence-based public health response. Emergency departments (EDs) in Australia are a potential source of information on work-related injuries, though most EDs do not have an ‘Activity Code’ to identify work-related cases, and information about the presenting problem is recorded in a short free-text field. This study compared methods for interrogating text fields to identify work-related injuries presenting at emergency departments, in order to inform approaches to surveillance of work-related injury.
Methods: Three approaches were used to interrogate an injury description text field to classify cases as work-related: keyword search, index search, and content analytic text mining. Sensitivity and specificity were examined by comparing cases flagged by each approach with cases coded with an Activity code during triage. Methods to improve the sensitivity and/or specificity of each approach were explored by adjusting the classification techniques within each broad approach.
Results: The basic keyword search detected 58% of cases (specificity 0.99), the index search detected 62% of cases (specificity 0.87), and the content analytic text mining approach (using adjusted probabilities) detected 77% of cases (specificity 0.95).
Conclusions: The findings of this study provide strong support for continued development of text searching methods to obtain information from routine emergency department data, in order to improve the capacity for comprehensive injury surveillance.
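The sketch below illustrates the simplest of the three approaches, a keyword search evaluated against the triage Activity code, and how sensitivity and specificity are computed from it. The keyword list, data file, and column names are hypothetical and not drawn from the study.

```python
# Sketch: flag presentations whose free-text injury description contains
# work-related keywords, then score against the Activity-coded gold standard.
import pandas as pd

ed = pd.read_csv("ed_presentations.csv")                       # hypothetical ED dataset
keywords = ["work", "workplace", "employer", "forklift", "scaffold"]  # illustrative only

text = ed["injury_description"].fillna("").str.lower()
flagged = text.str.contains("|".join(keywords))
gold = ed["activity_code_work_related"].astype(bool)           # coded at triage

tp = (flagged & gold).sum()
fn = (~flagged & gold).sum()
tn = (~flagged & ~gold).sum()
fp = (flagged & ~gold).sum()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```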
Abstract:
We study the suggestion that Markov switching (MS) models should be used to determine cyclical turning points. A Kalman filter approximation is used to derive the dating rules implicit in such models. We compare these with the dating rules in an algorithm that provides a good approximation to the chronology determined by the NBER. We find that there is very little that is attractive in the MS approach when compared with this algorithm. The most important difference relates to robustness: the MS approach depends on the validity of its statistical model, whereas our approach is valid in a wider range of circumstances.
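For intuition, the sketch below gives a highly simplified version of the kind of non-parametric dating rule favoured here: mark a peak (trough) when the series is a local maximum (minimum) within a window of two quarters on either side. The full algorithm adds censoring rules (alternation of peaks and troughs, minimum phase and cycle durations) that are deliberately omitted, and the function and its parameters are illustrative only.

```python
# Sketch: naive local-extremum turning-point rule for a quarterly log-output series.
import numpy as np

def simple_turning_points(y, k=2):
    """y: array-like of log GDP; returns lists of peak and trough indices."""
    y = np.asarray(y, dtype=float)
    peaks, troughs = [], []
    for t in range(k, len(y) - k):
        window = y[t - k : t + k + 1]
        if y[t] == window.max():
            peaks.append(t)          # local maximum within +/- k quarters
        elif y[t] == window.min():
            troughs.append(t)        # local minimum within +/- k quarters
    return peaks, troughs
```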
Abstract:
This paper presents the findings of an investigation into the rate-limiting mechanism for the heterogeneous burning of cylindrical iron rods in oxygen under normal gravity and microgravity. The original objective of the work was to determine why the observed melting rate for burning 3.2-mm diameter iron rods is significantly higher in microgravity than in normal gravity. This work, however, also provided fundamental insight into the rate-limiting mechanism for heterogeneous burning. The paper includes a summary of normal-gravity and microgravity experimental results, heat transfer analysis and post-test microanalysis of quenched samples. These results are then used to show that heat transfer across the solid/liquid interface is the rate-limiting mechanism for melting and burning, and that it is limited by the interfacial surface area between the molten drop and the solid rod. In normal gravity, the work improves the understanding of trends reported during standard flammability testing for metallic materials, such as variations in melting rates between test specimens with the same cross-sectional area but different cross-sectional shape. The work also provides insight into the effects of configuration and orientation, leading to an improved application of standard test results in the design of oxygen system components. For microgravity applications, the work enables the development of improved methods for lower-cost metallic material flammability testing programs. In these ways, the work provides fundamental insight into the heterogeneous burning process and contributes to improved fire safety for oxygen systems in applications involving both normal-gravity and microgravity environments.
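As a rough illustration of why the interfacial area matters (a generic steady energy balance, not the paper's heat transfer analysis), the heat conducted across the solid/liquid interface must supply the heating and fusion of the solid that melts:

```latex
\dot{m}\,\bigl(c_p\,\Delta T + \Delta H_{\mathrm{fus}}\bigr)
\;\approx\;
h_{\mathrm{int}}\,A_{\mathrm{int}}\,\bigl(T_{\mathrm{drop}} - T_{\mathrm{melt}}\bigr),
```

where \(\dot{m}\) is the mass melting rate, \(c_p\,\Delta T\) the heat needed to bring the solid to its melting point, \(\Delta H_{\mathrm{fus}}\) the heat of fusion, and \(h_{\mathrm{int}}\) an effective interfacial heat transfer coefficient; all symbols are introduced here for illustration. For a given drop temperature, the melting rate then scales with the interfacial area \(A_{\mathrm{int}}\), which differs between the normal-gravity and microgravity drop geometries.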
Abstract:
This paper presents a proposed qualitative framework for discussing the heterogeneous burning of metallic materials, through the parameters and factors that influence the melting rate of the solid metallic fuel (either in a standard test or in service). During burning, the melting rate is related to the burning rate and is therefore an important parameter for describing and understanding the burning process, especially since the melting rate is commonly recorded during standard flammability testing for metallic materials and is incorporated into many relative flammability ranking schemes. However, whilst the factors that influence melting rate (such as oxygen pressure or specimen diameter) have been well characterized, there is a need for an improved understanding of how these parameters interact as part of the overall melting and burning of the system. Proposed here is the ‘Melting Rate Triangle’, which aims to provide this understanding through a conceptual framework for how the melting rate (of solid fuel) is determined and regulated during heterogeneous burning. In the paper, the proposed conceptual model is shown to be both (a) consistent with known trends and previously observed results, and (b) capable of being expanded to incorporate new data. Also shown are examples of how the Melting Rate Triangle can improve the interpretation of flammability test results. Slusser and Miller previously published an ‘Extended Fire Triangle’ as a useful conceptual model of ignition and the factors affecting ignition, providing industry with a framework for discussion. In this paper it is shown that a ‘Melting Rate Triangle’ provides a similar qualitative framework for burning, leading to an improved understanding of the factors affecting fire propagation and extinguishment.