Abstract:
Twenty-first century learners operate in organic, immersive environments. A pedagogy of student-centred learning is not a recipe for rooms. A contemporary learning environment is like a landscape that grows, morphs, and responds to the pressures of its context and micro-culture. There is no single adaptable solution, nor a suite of off-the-shelf answers; propositions must be customisable and infinitely variable. They must be indeterminate and changeable, based on the creation of learning places rather than restrictive or constraining spaces. A sustainable solution will be un-fixed, responsive to the life cycle of its components and materials, and able to be manipulated by its users; it will create and construct its own history. Learning occurs as formal education with situational knowledge structures, but also as informal learning, active learning, blended learning, social learning, incidental learning, and unintended learning. These are not spatial concepts but socio-cultural patterns of discovery. Individual learning requirements must run free and be accommodated as the learner sees fit. The spatial solution must accommodate and enable a full array of learning situations. It is a system, not an object. The system has three major components:
1. The determinate landscape: an in-situ concrete 'plate' that is permanent. It predates the other components of the system and remains as a remnant/imprint/fossil after the other components have been relocated. It is a functional learning landscape in its own right, enabling a variety of experiences and activities.
2. The indeterminate landscape: a kit of pre-fabricated 2-D panels assembled in a unique manner at each site to suit the client and context, manufactured to the principles of design-for-disassembly. A symbiotic, barnacle-like system that attaches itself to the existing infrastructure through the determinate landscape, which acts as a fast-growth rhizome. A carapace of protective panels, infinitely variable to create enclosed, semi-enclosed, and open learning places.
3. The stations: pre-fabricated packages of highly-serviced space connected through the determinate landscape. There are four main types of stations: wet-room learning centres, dry-room learning centres, ablutions, and low-impact building services. They are entirely customised at the factory and delivered to site, and can be retro-fitted to suit a new context during relocation.
Principles of design for disassembly:
Material principles
• use recycled and recyclable materials
• minimise the number of types of materials
• no toxic materials
• use lightweight materials
• avoid secondary finishes
• provide identification of material types
Component principles
• minimise/standardise the number of types of components
• use mechanical not chemical connections
• design for use of common tools and equipment
• provide easy access to all components
• make component size suit the means of handling
• provide built-in means of handling
• design to realistic tolerances
• use a minimum number of connectors and a minimum number of types
System principles
• design for durability and repeated use
• use prefabrication and mass production
• provide spare components on site
• sustain all assembly and material information
Abstract:
One aim of the Australasian Nutrition Care Day Survey was to explore nutrition care practices in acute care hospital wards across Australia and New Zealand. Managers of Dietetic departments completed a questionnaire regarding ward nutrition care practices. Overall, 370 wards from 56 hospitals participated. The median ward size was 28 beds (range: 8–60 beds). Although there was a wide variation in full-time equivalent availability of dietitians (median: 0.3; range: 0–1.4), their involvement in providing nutrition care across ward specialities was significantly higher than that of other staff members (χ2, p < 0.01). Feeding assistance, available in 89% of the wards, was provided mainly by nursing staff and family members (χ2, p < 0.01). Protected meal times were implemented in 5% (n = 18) of the wards. Fifty-three percent of the wards (n = 192) weighed patients on request and 40% (n = 148) on admission. Routine malnutrition screening was conducted in 63% (n = 232) of the wards; 79% (n = 184) of these wards used the Malnutrition Screening Tool, 16% (n = 37) the Malnutrition Universal Screening Tool, and 5% (n = 11) other tools. Nutrition rescreening was routinely conducted in 20% of the wards. Among wards that implemented nutrition screening, 41% (n = 100) routinely referred patients "at risk" of malnutrition to dietitians as part of their standard protocol for malnutrition management. Results of this study provide new knowledge regarding current nutrition care practice, highlight gaps in existing practice, and can be used to inform improved nutrition care in acute care wards across Australia and New Zealand.
Abstract:
Purpose: Virally mediated head and neck cancers (VMHNC) often present with nodal involvement and are generally considered radioresponsive, resulting in the need for a re-planning CT during radiotherapy (RT) in a subset of patients. We sought to identify a high-risk group based on nodal size to be evaluated in a future prospective adaptive RT trial. Methodology: Between 2005 and 2010, 121 patients with virally mediated, node-positive nasopharyngeal (EBV-positive) or oropharyngeal (HPV-positive) cancers receiving curative-intent RT were reviewed. Patients were analysed based on the maximum size of the dominant node with a view to grouping them into varying risk categories for the need for re-planning. The frequency and timing of the re-planning scans were also evaluated. Results: Sixteen nasopharyngeal and 105 oropharyngeal tumours were reviewed. Twenty-five (21%) patients underwent a re-planning CT at a median of 22 (range, 0-29) fractions, with 1 patient requiring re-planning prior to the commencement of treatment. Based on the analysis, patients were subsequently placed into 3 groups: ≤35 mm (Group 1), 36-45 mm (Group 2), ≥46 mm (Group 3). Re-planning CTs were performed in 8/68 (11.8%) of Group 1, 4/28 (14.3%) of Group 2, and 13/25 (52%) of Group 3. The sample size did not allow statistical analysis to detect, or to exclude, a significant difference between the 3 groups. Conclusion: In this series, patients with VMHNC and nodal size ≥46 mm appear to be a high-risk group for the need for re-planning during a course of definitive radiotherapy. This finding will now be tested in a prospective adaptive RT study.
Abstract:
Purpose: Virally mediated head and neck cancers (VMHNC) often present with nodal involvement and are generally considered radioresponsive, resulting in the need for plan adaptation during radiotherapy in a subset of patients. We sought to identify a high-risk group based on pre-treatment nodal size to be evaluated in a future prospective adaptive radiotherapy trial. Methodology: Between 2005 and 2010, 121 patients with virally mediated, node-positive nasopharyngeal or oropharyngeal cancers receiving definitive radiotherapy were reviewed. Patients were analysed based on the maximum size of the dominant node at diagnosis with a view to grouping them into varying risk categories for the need for re-planning. The frequency and timing of the re-planning scans were also evaluated. Results: Sixteen nasopharyngeal and 105 oropharyngeal tumours were reviewed. Twenty-five (21%) patients underwent a re-planning CT at a median of 22 (range, 0-29) fractions, with 1 patient requiring re-planning prior to the commencement of treatment. Based on the analysis, patients were subsequently placed into 3 groups defined by pre-treatment nodal size: ≤35 mm (Group 1), 36-45 mm (Group 2), ≥46 mm (Group 3). Applying these groups to the patient cohort, re-planning CTs were performed in 8/68 (11.8%) of Group 1, 4/28 (14.3%) of Group 2, and 13/25 (52%) of Group 3. Conclusion: In this series, patients with VMHNC and nodal size ≥46 mm appear to be a high-risk group for the need for plan adaptation during a course of definitive radiotherapy. This finding will now be tested in a prospective adaptive radiotherapy study.
Abstract:
Laboratory-based studies of human dietary behaviour benefit from highly controlled conditions; however, this approach can lack ecological validity. Identifying a reliable method to capture and quantify natural dietary behaviours represents an important challenge for researchers. In this study, we scrutinised cafeteria-style meals in the ‘Restaurant of the Future.’ Self-selected meals were weighed and photographed, both before and after consumption. Using standard portions of the same foods, these images were independently coded to produce accurate and reliable estimates of (i) initial self-served portions, and (ii) food remaining at the end of the meal. Plate cleaning was extremely common; in 86% of meals at least 90% of self-selected calories were consumed. Males ate a greater proportion of their self-selected meals than did females. Finally, when participants visited the restaurant more than once, the correspondence between selected portions was better predicted by the weight of the meal than by its energy content. These findings illustrate the potential benefits of meal photography in this context. However, they also highlight significant limitations, in particular, the need to exclude large amounts of data when one food obscures another.
Abstract:
This paper explores grassroots leadership, an under-researched and often side-lined approach to leadership that operates outside formal bureaucratic structures. The paper's central claim is that an understanding of grassroots leadership, and of the tactics used by grassroots leaders, provides valuable insights for the study of school leadership. In this paper, we present and discuss an original model of grassroots leadership built on the argument that this under-researched area can further our understanding of school leadership. Drawing upon the limited literature in the field, we present a model consisting of two approaches to change (i.e. conflict and consensus) and two categories of change (i.e. reform and refinement), and then provide illustrations of how the model works in practice. We argue that the model has much merit for conceptualizing school leadership, and this is illustrated by applying the model to formal bureaucratic leadership within school contexts. Given the current climate in education, where business and management language is pervasive within leadership-preparation programs, we argue that it is timely for university academics who are responsible for preparing school leaders to consider broadening their approach by exposing school leaders to a variety of change-based strategies and tactics used by grassroots leaders.
Abstract:
Since 2007, Kite Arts Education Program (KITE), based at Queensland Performing Arts Centre (QPAC), has been engaged in delivering a series of theatre-based experiences for children in low socio-economic primary schools in Queensland. The artist-in-residence (AIR) project titled Yonder includes performances developed by the children, with the support and leadership of teacher artists from KITE, for their community and parents/carers, supported by a peak community cultural institution. In 2009, Queensland Performing Arts Centre partnered with Queensland University of Technology (QUT) Creative Industries Faculty (Drama) to conduct a three-year evaluation of the Yonder project to understand the operational dynamics, artistic outputs and educational benefits of the project. This paper outlines the research findings for children engaged in the Yonder project in the interrelated areas of literacy development and social competencies. Findings are drawn from six iterations of the project in suburban locations on the edge of Brisbane city and in regional Queensland.
Abstract:
ICT (Information and Communication Technology) creates numerous opportunities for teachers to re-think their pedagogies. In a subject like mathematics, which draws upon abstract concepts, ICT creates such an opportunity. Instead of a mimetic pedagogical approach, suitably designed activities with ICT can enable learners to engage more proactively with their learning. In this quasi-experimental study, ICT was used in teaching mathematics to a group of first-year high school students (N=25) in Australia. The control group was taught predominantly through traditional pedagogies (N=22). Most of the variables that had previously impacted on the design of such studies were suitably controlled in this year-long investigation. Quantitative and qualitative results showed that students who were taught by ICT-driven pedagogies benefitted from the experience. Pre- and post-test means showed that there was a difference between the treatment and control groups. Of greater significance was that the students (in the treatment group) believed that the technology enabled them to engage more with their learning.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model the condition indicators, the operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing models were developed from the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate all three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (dependent variables) whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related, and yet more imperative, question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health, condition measurements, and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and re-form the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators may be nil in EHM, the condition indicators always contribute, because they are observed and measured for as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators and the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, is not required in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has also been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
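As a point of reference for the framework described above, the following is an illustrative sketch rather than the thesis's exact formulation; the symbols β and ψ and the functional forms are assumptions made for exposition. The classical Proportional Hazard Model scales a time-only baseline by a covariate function, whereas the abstract describes EHM as moving the condition indicators into the baseline itself and reserving the covariate function for the operating environment indicators:

\[
h_{\mathrm{PHM}}(t \mid \mathbf{z}) = h_0(t)\,\exp\!\big(\boldsymbol{\beta}^{\top}\mathbf{z}\big),
\qquad
h_{\mathrm{EHM}}\big(t \mid \mathbf{c}(t), \mathbf{z}(t)\big) = h_0\big(t, \mathbf{c}(t)\big)\,\psi\big(\mathbf{z}(t)\big),
\]

where \(\mathbf{c}(t)\) denotes the condition indicators (response variables entering the baseline), \(\mathbf{z}(t)\) the operating environment indicators (explanatory variables entering the covariate function \(\psi\)), and \(h_0\) the baseline hazard. In the semi-parametric form described, \(h_0\) would take a Weibull-type shape in \(t\); in the non-parametric form it is left distribution-free.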
Abstract:
One aspect of quality education in the 21st century is the availability of digital resources in schools. Many developing countries need to build this capability – not just in terms of technology but also teacher capability. One of the ways to achieve such capacity is through knowledge sharing between teachers and educators in developed and developing countries. Over time, such collaboration can have a lasting impact on all participants on both sides of the digital divide. This paper reports on how such collaboration can occur. It focuses on the initial stages of a long-term initiative whose primary objective is to develop models that demonstrate how we (in developed countries) can engage productively and meaningfully with schools in developing countries to build their ICT capacity. As part of this initiative, we introduced laptops and LEGO robotics tool kits to a rural primary school in Fiji. We developed ICT activities that aligned with the curriculum in a number of subjects. In addition, we worked with the teachers over two weeks to build their expertise.
Abstract:
This article outlines the integration of robotics in two settings in a primary school. This initiative was part of an Australian Research Council project which was undertaken at this school. The article highlights how robotics was integrated in a technology unit in a year four class. It also explains how it was embedded into an after-school program which catered for students from years five to seven. From these experiences further possibilities of engaging with robotics are also discussed.
Abstract:
This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we determine a set of metrics to quantify their security properties and resource usage such as processing, storage and communication overheads. We provide a taxonomy of solutions, and identify trade-offs in these schemes to conclude that there is no one-size-fits-all solution.
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path between them, called a key-path, in which each pair of neighboring nodes has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key, either directly or through a path, with high probability. The length of the key-path is the key factor in the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools.
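For contrast with the combinatorial approach proposed here, the probabilistic baseline the abstract refers to (key-chains drawn uniformly at random from a shared key-pool) can be sketched in a few lines. This is a minimal illustration of the standard random-predistribution connectivity calculation, not code from the paper; the function name and example parameters are my own.

```python
from math import comb

def share_probability(pool_size: int, chain_size: int) -> float:
    """Probability that two key-chains of `chain_size` keys, each drawn
    uniformly at random (without replacement) from a common pool of
    `pool_size` keys, have at least one key in common."""
    # The second chain avoids every key of the first with probability
    # C(P - k, k) / C(P, k); sharing at least one key is the complement.
    miss_all = comb(pool_size - chain_size, chain_size) / comb(pool_size, chain_size)
    return 1.0 - miss_all

if __name__ == "__main__":
    # Example: a pool of 10,000 keys and key-chains of 75 keys give a
    # direct-connection probability of roughly 0.43 for two neighbors.
    print(f"{share_probability(10_000, 75):.3f}")
```

Deterministic block-design schemes, by contrast, aim to guarantee (rather than merely make probable) a shared key or a short key-path, which is the trade-off the paper explores.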
Abstract:
YBa2Cu3O7-x wires have been extruded with 2 and 5 wt.% of hydroxypropyl methylcellulose (HPMC) as binder. Both sets of wires sintered below 930°C have equiaxed grains, while the wires sintered above this temperature have elongated grains. In the temperature range that gives equiaxed grains, the wires extruded with 5 wt.% HPMC have a larger grain size and higher density. Cracks along the grain boundaries are often observed in the wires with elongated grains. The critical current density, Jc, increases initially, reaches a peak, and then decreases with sintering temperature. The sintering temperature giving a peak in Jc depends strongly on the heat treatment scheme for the wires extruded with 5 wt.% HPMC. TEM studies show that defective layers are formed along grain boundaries in the wires extruded with 5 wt.% HPMC after 5 h of oxygenation. After 55 h of oxygenation, the defective layers become more localised and the grain boundaries adopt an overall cleaner appearance. Densification with equiaxed grains and clean grain boundaries produces the highest Jc values for polycrystalline YBa2Cu3O7 wires.
Abstract:
YBCO wires consisting of well-oriented, plate-like fine grains are fabricated using a moving furnace to achieve higher mechanical strength. Melt-texturing experiments have been undertaken on YBCO wires with two different compositions: YBa1.5Cu2.9O7-x and YBa1.8Cu3.0O7-x. Wires are extruded from a mixture of precursor powders (formed by a coprecipitation process) and then textured by firing in a moving furnace. The size of secondary phases, such as barium cuprate and copper oxide, and the overall composition of the sample affect the orientation of the fine grains. At zero magnetic field, the YBa1.5Cu2.9O7-x wire shows the highest critical current densities, 1,450 A cm-2 and 8,770 A cm-2 at 77 K and 4.2 K, respectively. At 1 T, critical current densities of 30 A cm-2 and 200 A cm-2 are obtained at 77 K and 4.2 K, respectively. Magnetisation curves are also obtained for one sample to evaluate the critical current density using the Bean model. Analysis of the microstructure indicates that the starting composition of the green body significantly affects the achievement of grain alignment via melt-texturing processes.
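The Bean-model estimate mentioned above relates the width of the magnetisation hysteresis loop to the critical current density. As a reminder of the commonly quoted critical-state relations (standard textbook forms, not values or geometry taken from the paper itself):

\[
J_c = \frac{20\,\Delta M}{d} \ \ \text{(infinite slab of thickness } d\text{, parallel field)},
\qquad
J_c = \frac{30\,\Delta M}{d} \ \ \text{(cylinder of diameter } d\text{, axial field)},
\]

where, in CGS units, \(\Delta M\) is the loop width in emu/cm\(^3\) at the field of interest, \(d\) is in cm, and \(J_c\) is obtained in A/cm\(^2\). Which geometric prefactor applies depends on the sample shape and field orientation used in the magnetisation measurement.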