457 results for Interior point methods


Relevance:

20.00%

Publisher:

Abstract:

The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
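The least squares gradient estimate described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function name `ls_gradient`, the test function and the step size are all assumptions. The displacement matrix's smallest singular value, which appears in the denominator of the error bound discussed above, is returned alongside the estimate.

```python
import numpy as np

def ls_gradient(f, x0, neighbors):
    """Estimate the gradient of f at x0 by least squares on a
    truncated (first-order) Taylor expansion over nearby points."""
    A = np.array([x - x0 for x in neighbors])        # displacement matrix
    b = np.array([f(x) - f(x0) for x in neighbors])  # function differences
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Smallest singular value of A: the denominator of the error bound
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
    return g, sigma_min

# Example: f(x, y) = x^2 + 3y, so the true gradient at (1, 1) is (2, 3)
f = lambda p: p[0]**2 + 3*p[1]
x0 = np.array([1.0, 1.0])
h = 1e-4
pts = [x0 + h*np.array(d) for d in [(1, 0), (0, 1), (1, 1), (-1, 1)]]
g, smin = ls_gradient(f, x0, pts)
```

Shrinking the stencil spacing `h` reduces the Taylor remainder in the bound's numerator but also shrinks the smallest singular value in its denominator, which is the tension the paper's error analysis turns on.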


Multivariate methods are required to assess the interrelationships among multiple, concurrent symptoms. We examined the conceptual and contextual appropriateness of commonly used multivariate methods for cancer symptom cluster identification. From 178 publications identified in an online database search of Medline, CINAHL, and PsycINFO, limited to articles published in English in the 10 years prior to March 2007, 13 cross-sectional studies met the inclusion criteria. Conceptually, common factor analysis (FA) and hierarchical cluster analysis (HCA) are appropriate for symptom cluster identification, whereas principal component analysis is not. As a basis for new directions in symptom management, FA methods are more appropriate than HCA. Principal axis factoring or maximum likelihood factoring, the scree plot, oblique rotation, and clinical interpretation are the recommended approaches to symptom cluster identification.
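The scree criterion recommended above can be illustrated in a few lines. The symptom data below are synthetic (two groups of three correlated symptoms), invented purely to show the mechanics; the abstract's own studies used real patient ratings:

```python
import numpy as np

# Hypothetical severity ratings for 6 symptoms across 50 patients,
# constructed so that symptoms 0-2 and 3-5 form two correlated groups.
rng = np.random.default_rng(0)
g1 = rng.normal(size=(50, 1))
g2 = rng.normal(size=(50, 1))
X = np.hstack([g1 + 0.3*rng.normal(size=(50, 3)),
               g2 + 0.3*rng.normal(size=(50, 3))])

# Scree criterion: eigenvalues of the correlation matrix in descending
# order; a sharp "elbow" after the k-th eigenvalue suggests k factors.
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
```

With this construction the first two eigenvalues dominate and the remaining four are small, so the scree plot points to a two-factor (two-cluster) solution.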


The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were conducted using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that hindered the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic then is ‘both and’ rather than ‘either or’ for these individuals with a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the one time, be digital kids and analogue students.
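The CART modelling used in the quantitative phase rests on a simple splitting rule, sketched here for one predictor. The data and variable names (`peer_support`, `used_smc`) are hypothetical, invented to echo the study's headline finding that peer support best predicts SMC usage; the thesis itself fitted full multi-predictor trees:

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on a single predictor x that minimises the
    weighted Gini impurity of the two child nodes -- the splitting
    rule at the heart of CART."""
    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = np.mean(labels)          # proportion of positive cases
        return 2 * p * (1 - p)
    best_t, best_score = None, np.inf
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        score = (len(left)*gini(left) + len(right)*gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Hypothetical data: peer-support rating (1-5) vs. whether the student
# used the SMC; high peer support should emerge as the split point.
peer_support = np.array([1, 1, 2, 2, 3, 4, 4, 5, 5, 5])
used_smc     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
t, score = best_split(peer_support, used_smc)
```

A full CART model applies this search recursively over all predictors, which is how the incremental individual/social/technological models above were compared.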


Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second approach is a predictive indirect method which uses an interactive graphical tool. The third method uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine the prior and posterior distributions of the three methods. The results indicate that there are some similarities and dissimilarities between the expert-informed priors formulated by the two experts under the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, access to experts and the funding available. This trial reveals that expert knowledge can be important when modelling rare-event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However, care must be taken with the way in which this information is elicited and formulated.
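One way an elicited prior enters such a model can be sketched with a MAP estimate for Bayesian logistic regression under independent normal priors on the coefficients. Everything here is illustrative: the data are simulated, and the prior (mean +1 on the slope coefficient) stands in for an expert's belief that steeper slopes favour rock-wallaby presence; the paper's three elicitation methods produce such priors by very different routes:

```python
import numpy as np

def map_logistic(X, y, prior_mean, prior_sd, lr=0.1, steps=2000):
    """MAP estimate for Bayesian logistic regression with independent
    normal priors on the coefficients, via gradient ascent on the
    log-posterior."""
    beta = prior_mean.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ beta))
        # log-likelihood gradient plus log-prior gradient
        grad = X.T @ (y - p) - (beta - prior_mean) / prior_sd**2
        beta += lr * grad / len(y)
    return beta

# Hypothetical habitat data: intercept + slope covariate
rng = np.random.default_rng(1)
slope = rng.normal(size=200)
X = np.column_stack([np.ones(200), slope])
true_beta = np.array([-0.5, 1.5])
y = (rng.random(200) < 1/(1+np.exp(-X @ true_beta))).astype(float)

# Elicited prior: expert expects a positive slope effect
beta_hat = map_logistic(X, y, prior_mean=np.array([0.0, 1.0]),
                        prior_sd=np.array([2.0, 2.0]))
```

For genuinely rare-event data the likelihood is weak and the elicited prior dominates the posterior, which is why the abstract stresses care in how that prior is obtained.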


This paper proposes new droop control methods for load sharing in a rural area with distributed generation. Highly resistive lines, typical of rural low-voltage networks, pose a significant challenge for conventional droop control. To overcome the conflict between higher feedback gain for better power sharing and system stability in angle droop, two control methods are proposed. The first method assumes no communication among the distributed generators (DGs) and regulates the converter output voltage and angle to ensure proper load sharing in a system with strong coupling between real and reactive power due to high line resistance. The second method, based on minimal communication, modifies the reference output voltage angle of the DGs depending on the active and reactive power flow in the lines connected to the point of common coupling (PCC). It is shown that with the second proposed method, an economical, minimal communication system can achieve significant improvement in load sharing. The difference in error margin between the proposed control schemes and a more costly high-bandwidth communication system is small, and the latter may not be justified given the increase in cost. The proposed control shows stable operation of the system over a range of operating conditions while ensuring satisfactory load sharing.
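For orientation, the conventional angle-droop law that the paper builds on can be sketched as below. The gains and set points are illustrative per-unit values, not the paper's; the proposed methods modify this basic law (and, in the second method, the angle reference via PCC power flows), which is not reproduced here:

```python
# Angle/voltage droop for a converter-interfaced DG: the reference
# angle and voltage magnitude are shifted in proportion to the
# deviation of measured real and reactive power from their set points.

def droop_reference(P, Q, P_set=1.0, Q_set=0.5,
                    delta_rated=0.0, V_rated=1.0,
                    m=0.05, n=0.04):
    """Return (angle, voltage) references for one DG (per-unit)."""
    delta = delta_rated - m * (P - P_set)   # angle droop on real power
    V = V_rated - n * (Q - Q_set)           # voltage droop on reactive power
    return delta, V

# A DG exporting more than its set point lowers its angle reference,
# pushing its real power back toward the shared load allocation.
d1, v1 = droop_reference(P=1.2, Q=0.5)
d0, v0 = droop_reference(P=1.0, Q=0.5)
```

The conflict the paper addresses is visible in the gain `m`: a larger gain shares load more accurately across DGs but erodes stability margins, especially over resistive rural lines where real and reactive power are strongly coupled.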


A new steady-state method for determining the electron diffusion length in dye-sensitized solar cells (DSCs) is described and illustrated with data obtained using cells containing three different types of electrolyte. The method is based on using near-IR absorbance to establish pairs of illumination intensities for which the total number of trapped electrons is the same at open circuit (where all electrons are lost by interfacial electron transfer) as at short circuit (where the majority of electrons are collected at the contact). Electron diffusion length values obtained by this method are compared with values derived by intensity-modulated methods and by impedance measurements under illumination. The results indicate that the values of electron diffusion length derived from the steady-state measurements are consistently lower than the values obtained by the non-steady-state methods. For all three electrolytes used in the study, the electron diffusion length was sufficiently high to guarantee electron collection efficiencies greater than 90%. Measurement of the trap distributions by near-IR absorption confirmed earlier observations of much higher electron trap densities for electrolytes containing Li+ ions. It is suggested that the electron trap distributions may not be intrinsic properties of the TiO2 nanoparticles, but may be associated with electron-ion interactions.


Background: Work-related injuries in Australia are estimated to cost around $57.5 billion annually; however, there are currently insufficient surveillance data available to support an evidence-based public health response. Emergency departments (EDs) in Australia are a potential source of information on work-related injuries, though most EDs do not have an ‘Activity code’ to identify work-related cases, with information about the presenting problem recorded in a short free-text field. This study compared methods for interrogating text fields to identify work-related injuries presenting at emergency departments, to inform approaches to surveillance of work-related injury.

Methods: Three approaches were used to interrogate an injury-description text field to classify cases as work-related: keyword search, index search, and content-analytic text mining. Sensitivity and specificity were examined by comparing cases flagged by each approach to cases coded with an Activity code during triage. Methods to improve the sensitivity and/or specificity of each approach were explored by adjusting the classification techniques within each broad approach.

Results: The basic keyword search detected 58% of cases (specificity 0.99), an index search detected 62% of cases (specificity 0.87), and the content-analytic text mining approach (using adjusted probabilities) detected 77% of cases (specificity 0.95).

Conclusions: The findings of this study provide strong support for continued development of text-searching methods to obtain information from routine emergency department data, to improve the capacity for comprehensive injury surveillance.
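The keyword approach and its evaluation against the Activity code can be sketched as follows. The keyword list and triage records are invented toy data, far simpler than the study's real free-text field; the mechanics of computing sensitivity and specificity against a gold standard are the same:

```python
import re

# Illustrative keyword patterns, not the study's actual list
WORK_KEYWORDS = [r"\bat work\b", r"\bworkplace\b", r"\bwork[- ]related\b",
                 r"\bon the job\b", r"\boccupational\b"]

def flag_work_related(text):
    """Keyword approach: flag a triage free-text description as
    work-related if any keyword pattern matches."""
    t = text.lower()
    return any(re.search(p, t) for p in WORK_KEYWORDS)

# Toy evaluation against a gold-standard Activity code (True = work-related)
records = [
    ("laceration to hand at work using saw",  True),
    ("fell from ladder, workplace incident",  True),
    ("sprained ankle playing netball",        False),
    ("back pain lifting boxes on the job",    True),
    ("burn from kettle at home",              False),
]
tp = sum(flag_work_related(t) for t, gold in records if gold)
fp = sum(flag_work_related(t) for t, gold in records if not gold)
pos = sum(gold for _, gold in records)
neg = len(records) - pos
sensitivity = tp / pos          # flagged / truly work-related
specificity = (neg - fp) / neg  # correctly left unflagged
```

On real triage text, a fixed keyword list misses paraphrases and abbreviations, which is why the study found the plain keyword search the least sensitive of the three approaches.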


We study the suggestion that Markov switching (MS) models should be used to determine cyclical turning points. A Kalman filter approximation is used to derive the dating rules implicit in such models. We compare these with dating rules in an algorithm that provides a good approximation to the chronology determined by the NBER. We find that there is very little that is attractive in the MS approach when compared with this algorithm. The most important difference relates to robustness. The MS approach depends on the validity of that statistical model. Our approach is valid in a wider range of circumstances.


Hamilton (2001) makes a number of comments on our paper (Harding and Pagan, 2002b). The objectives of this rejoinder are, firstly, to note the areas in which we agree; secondly, to define with greater clarity the areas in which we disagree; and, thirdly, to point to other papers, including a longer version of this response, where we have dealt with some of the issues he raises. The core of our debate with him is whether one should use an algorithm with a specified set of rules for determining the turning points in economic activity, or whether one should use a parametric model that features latent states. Hamilton begins his criticism by stating that there is a philosophical distinction between the two methods for dating cycles and concludes that the method we use “leaves vague and intuitive exactly what this algorithm is intended to measure”. Nothing could be further from the truth. When seeking to decide whether a turning point has occurred, it is always useful to ask: what is a recession? Common usage suggests that it is a decline in the level of economic activity that lasts for some time. For this reason it has become standard to describe a recession as a decline in GDP that lasts for more than two quarters. Finding periods in which quarterly GDP declined for two consecutive quarters is exactly what our approach does. What is vague about this?
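The two-quarter rule invoked above is concrete enough to state in a few lines of code. This is a minimal sketch of that rule only; the authors' full dating algorithm adds further turning-point and censoring rules not reproduced here:

```python
def recession_quarters(gdp):
    """Flag quarters belonging to a run of at least two consecutive
    quarterly declines in GDP, marking the peak quarter through the
    latest declining quarter."""
    declines = [gdp[i] < gdp[i-1] for i in range(1, len(gdp))]
    flags = [False] * len(gdp)
    run = 0
    for i, down in enumerate(declines, start=1):
        run = run + 1 if down else 0
        if run >= 2:                     # two straight declines: mark the run
            for j in range(i - run, i + 1):
                flags[j] = True
    return flags

# Illustrative quarterly GDP series with one two-quarter contraction
gdp = [100, 101, 102, 101, 100, 101, 103]
flags = recession_quarters(gdp)
```

The point of the rejoinder is precisely this transparency: every flagged quarter follows from a stated rule, with no latent-state model whose validity must be assumed.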