807 results for Teaching performance assessment
Abstract:
Objective: To develop the DCDDaily, an instrument for objective and standardized clinical assessment of capacity in activities of daily living (ADL) in children with developmental coordination disorder (DCD), and to investigate its usability, reliability, and validity. Subjects: Five- to eight-year-old children with and without DCD. Main measures: The DCDDaily was developed based on a thorough review of the literature and extensive expert involvement. To investigate the usability (assessment time and feasibility), reliability (internal consistency and repeatability), and validity (concurrent and discriminant validity) of the DCDDaily, children were assessed with the DCDDaily and the Movement Assessment Battery for Children-2 Test, and their parents filled in the Movement Assessment Battery for Children-2 Checklist and the Developmental Coordination Disorder Questionnaire. Results: 459 children were assessed (DCD group, n = 55; normative reference group, n = 404). Assessment was possible within 30 minutes and in any clinical setting. For internal consistency, Cronbach’s α = 0.83. Intraclass correlation = 0.87 for test–retest reliability and 0.89 for inter-rater reliability. Concurrent correlations with the Movement Assessment Battery for Children-2 Test and the questionnaires were ρ = −0.494, 0.239, and −0.284, all p < 0.001. Discriminant validity measures showed significantly worse performance in the DCD group than in the control group (mean (SD) score 33 (5.6) versus 26 (4.3), p < 0.001). The area under the receiver operating characteristic curve = 0.872; sensitivity and specificity were both 80%. Conclusions: The DCDDaily is a valid and reliable instrument for clinical assessment of capacity in ADL that is feasible for use in clinical practice.
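As a rough illustration of two of the psychometric quantities reported above (internal consistency and the area under the ROC curve), the following sketch uses synthetic item scores and group labels; the sample sizes, item count, and scoring direction are assumptions for illustration, not the study's data.

```python
# Illustrative sketch (not the study's data or code): Cronbach's alpha for
# internal consistency and ROC AUC for discriminant validity.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item scores: 100 children x 18 items (sizes are assumptions)
items = rng.integers(1, 4, size=(100, 18)).astype(float)

def cronbach_alpha(scores):
    """Cronbach's alpha from an (n_subjects, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic (probability a DCD child scores higher)."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return ((pos > neg).sum() + 0.5 * (pos == neg).sum()) / (pos.size * neg.size)

total = items.sum(axis=1)
dcd = total[:20] + 5       # pretend the first 20 children have DCD (higher score = worse)
control = total[20:]
print("alpha:", round(cronbach_alpha(items), 2))
print("AUC:  ", round(roc_auc(dcd, control), 2))
```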
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one of the methods to address such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with a number of GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices (2020s, 2050s, and 2080s) under the A1B, A2, and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
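A minimal sketch of the CDF-envelope idea described above, assuming a handful of hypothetical GCMs with made-up downscaled rainfall samples and weights: the weighted mean gives the single-valued ensemble CDF, while the pointwise minimum and maximum of the individual CDFs bound the imprecise (interval-valued) CDF.

```python
# Sketch only: the GCM names, samples, and weights are assumptions, not the study's values.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical monsoon-rainfall projections (mm) from four GCMs for one time slice
gcm_samples = {
    "GCM_A": rng.normal(900, 80, 30),
    "GCM_B": rng.normal(870, 60, 30),
    "GCM_C": rng.normal(930, 90, 30),
    "GCM_D": rng.normal(850, 70, 30),
}
weights = {"GCM_A": 0.3, "GCM_B": 0.2, "GCM_C": 0.25, "GCM_D": 0.25}  # assumed weights

grid = np.linspace(600, 1200, 200)          # rainfall values at which to evaluate the CDFs

def ecdf(samples, x):
    """Empirical CDF of `samples` evaluated at the points in `x`."""
    return np.searchsorted(np.sort(samples), x, side="right") / len(samples)

cdfs = np.array([ecdf(s, grid) for s in gcm_samples.values()])
w = np.array([weights[k] for k in gcm_samples])

weighted_mean_cdf = w @ cdfs                 # single-valued weighted ensemble CDF
lower_envelope = cdfs.min(axis=0)            # bounds of the imprecise (interval) CDF
upper_envelope = cdfs.max(axis=0)

# Probability of rainfall not exceeding ~900 mm under the mean CDF and the envelope
i = np.searchsorted(grid, 900)
print(weighted_mean_cdf[i], lower_envelope[i], upper_envelope[i])
```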
Abstract:
Health challenges present arguably the most significant barrier to sustainable global development. The introduction of ICT in healthcare, especially the application of mobile communications, has created the potential to transform healthcare delivery by making it more accessible, affordable and effective across the developing world. However, current research into the assessment of mHealth from the perspective of developing countries, particularly with community health workers (CHWs) as primary users, continues to be limited. The aim of this study is to analyze the contribution of mHealth to enhancing the performance of health workers and its alignment with existing workflows in order to guide its utilization. The proposed research takes this consideration into account and aims to examine the task-technology alignment of mHealth for CHWs, drawing upon task-technology fit as the theoretical foundation.
Abstract:
Several hypnosis monitoring systems based on the processed electroencephalogram (EEG) have been developed for use during general anesthesia. The assessment of the analgesic component (antinociception) of general anesthesia is an emerging field of research. This study investigated the interaction of hypnosis and antinociception, the association of several physiological variables with the degree of intraoperative nociception, and aspects of EEG Bispectral Index Scale (BIS) monitoring during general anesthesia. In addition, EEG features and heart rate (HR) responses during desflurane and sevoflurane anesthesia were compared. A propofol bolus of 0.7 mg/kg was more effective than an alfentanil bolus of 0.5 mg in preventing the recurrence of movement responses during uterine dilatation and curettage (D&C) after a propofol-alfentanil induction, combined with nitrous oxide (N2O). HR and several HR variability-, frontal electromyography (fEMG)-, pulse plethysmography (PPG)-, and EEG-derived variables were associated with surgery-induced movement responses. Movers were discriminated from non-movers mostly by the post-stimulus values per se or normalized with respect to the pre-stimulus values. In logistic regression analysis, the best classification performance was achieved with the combination of normalized fEMG power and HR during D&C (overall accuracy 81%, sensitivity 53%, specificity 95%), and with the combination of normalized fEMG-related response entropy, electrocardiography (ECG) R-to-R interval (RRI), and PPG dicrotic notch amplitude during sevoflurane anesthesia (overall accuracy 96%, sensitivity 90%, specificity 100%). ECG electrode impedances after alcohol swab skin pretreatment alone were higher than the impedances of designated EEG electrodes. The BIS values registered with ECG electrodes were higher than those registered simultaneously with EEG electrodes. No significant difference in the time to home-readiness after isoflurane-N2O or sevoflurane-N2O anesthesia was found when the administration of the volatile agent was guided by BIS monitoring. All other early and intermediate recovery parameters were also similar. Transient epileptiform EEG activity was detected in eight of 15 sevoflurane patients during a rapid increase in the inspired volatile concentration, and in none of the 16 desflurane patients. The observed transient EEG changes did not adversely affect the recovery of the patients. Following the rapid increase in the inhaled desflurane concentration, HR increased transiently, reaching its maximum in two minutes. In the sevoflurane group, the increase was slower and more subtle. In conclusion, desflurane may be a safer volatile agent than sevoflurane in patients with a lowered seizure threshold. The tachycardia induced by a rapid increase in the inspired desflurane concentration may present a risk for patients with heart disease. Designated EEG electrodes may be superior to ECG electrodes in EEG BIS monitoring. When the administration of isoflurane or sevoflurane is adjusted to maintain BIS values at 50-60 in healthy ambulatory surgery patients, the speed and quality of recovery are similar after both isoflurane-N2O and sevoflurane-N2O anesthesia. When anesthesia is maintained by the inhalation of N2O and bolus doses of propofol and alfentanil in healthy unparalyzed patients, movement responses may be best avoided by ensuring a relatively deep hypnotic level with propofol.
HR/RRI, fEMG, and PPG dicrotic notch amplitude are potential indicators of nociception during anesthesia, but their performance needs to be validated in future studies. Combining information from different sources may improve the discrimination of the level of nociception.
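The following sketch illustrates, on synthetic data only, the kind of logistic-regression discrimination of movers from non-movers reported above; the variable names, effect sizes, and in-sample evaluation are assumptions for illustration and do not reproduce the thesis analysis.

```python
# Sketch: combine two normalized variables (e.g. fEMG power and HR) to classify
# movement responses, then report sensitivity and specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n = 200
moved = rng.integers(0, 2, n)                       # 1 = movement response
femg_norm = rng.normal(1.0 + 0.6 * moved, 0.4, n)   # post/pre-stimulus fEMG ratio (made up)
hr_norm = rng.normal(1.0 + 0.3 * moved, 0.2, n)     # post/pre-stimulus HR ratio (made up)
X = np.column_stack([femg_norm, hr_norm])

clf = LogisticRegression().fit(X, moved)
pred = clf.predict(X)                               # in-sample, for illustration only
tn, fp, fn, tp = confusion_matrix(moved, pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:   ", (tp + tn) / n)
```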
Abstract:
This project began in 2013, with the award of an internal QUT Teaching and Learning grant. The task we wished to undertake was to document and better understand the role of studio teaching practice in the Creative Industries Faculty. While it was well understood that the Faculty had long used studio pedagogies as a key part of its teaching approach, organizational and other changes made it productive and timely to consider how the various study areas within the Faculty were approaching studio teaching. Chief among these changes were innovations in the use of technology in teaching, and at an organizational level the merging of what were once two schools within different faculties into a newly-structured Creative Industries Faculty. The new faculty consists of two schools: Media, Entertainment and Creative Arts (MECA) and Design. We hoped to discover more about how studio techniques were developing alongside an ever-increasing number of options for content delivery, assessment, and interaction with students. And naturally we wanted to understand such developments across the broad range of nineteen study areas now part of the Creative Industries Faculty. This e-book represents the first part of our project, which in the main consisted of observing the teaching practices used in eight units across the Faculty, followed by interviews with the unit coordinators involved. In choosing units, we opted for a broad opening definition of ‘studio’ to include not only traditional studios but also workshops and tutorials in which we could identify a component of studio teaching as enumerated by the Australian Learning and Teaching Council’s Studio Teaching Project:
• A culture, a creative community created by a group of students and studio teachers working together for periods of time
• A mode of teaching and learning where students and studio teachers interact in a creative and reflective process
• A program of projects and activities where content is structured to enable ‘learning in action’
• A physical space or constructed environment in which the teaching and learning can take place
(Source: http://www.studioteaching.org/?page=what_is_studio)
The units we chose to observe, and which we hoped would represent something of the diversity of our study areas, were:
• Dance Project 1
• Furniture Studies
• Wearable Architecture
• Fashion Design 4
• Industrial Design 6
• Advanced Writing Practice 3
• Introduction to Creative Writing
• Studio Art Practice 2
Over the course of two semesters in 2013, we attended classes, presentations, and studio time in these units, and then conducted interviews that we felt would give further insight into both individual and discipline-specific approaches to studio pedagogies. We asked the same questions in each of the interviews:
• Could you describe the main focus and aims of your unit?
• How do you use studio time to achieve those aims?
• Can you give us an example of the kind of activities you use in your studio teaching?
• What does/do these example(s) achieve in terms of learning outcomes?
• What, if any, is the role of technology in your studio teaching practice?
• What do you consider distinctive about your approach to studio teaching, or the approach taken in your discipline area?
The unit coordinators’ responses to these questions form some of the most interesting and valuable material in this book, and point to both consistencies in approach and teaching philosophies, as well as areas of difference.
We believe that both can help to raise our critical awareness of studio teaching, and provide points of comparison for the future development of studio pedagogy in the Creative Industries. In each of the following pages, the interviews are placed alongside written descriptions of the units, their aims and outcomes, assessment models, and where possible photographs and video footage, as well as additional resources that may be useful to others engaged in studio teaching.
Abstract:
The integral diaphragm pressure transducer consists of a diaphragm machined from precipitation-hardened martensitic (APX4) steel. Its performance depends upon various factors, such as mechanical properties (including induced residual stress levels) and metallurgical and physical parameters arising from the different stages of processing involved. Hence, the measurement and analysis of residual stress become very important from the point of view of in-service assessment of a component. In the present work, the stress measurements have been carried out using the X-ray diffraction (XRD) technique, a non-destructive test (NDT) method that is more reliable and more widely used than other NDT techniques. The metallurgical aspects have been studied by adopting conventional metallographic practices, including examination of the microstructure using a light microscope. The dimensional measurements have been carried out using a dimensional gauge. The results of the present investigation reveal that the diaphragm material, after undergoing a series of realization processes, retained a considerable amount of austenite. Also, the presence of higher compressive stresses induced in the transducer results in non-linearity, zero shift, and dimensional instability. The problems of high retained austenite content and high compressive stress have been overcome by adopting a new realization process involving machining and cold and hot stabilization soaks, which has brought the retained austenite content down to about 5–6% and the compressive stress to an acceptable level in the range of −100 to −150 MPa, with a fine tempered martensitic phase structure and good dimensional stability. The new realization process appears quite effective in controlling retained austenite content, residual stress, metallurgical phase, and dimensional stability, and this may result in minimum zero shift of the diaphragm system.
Abstract:
We compared student performance on large-scale take-home assignments and small-scale invigilated tests that require competency with exactly the same programming concepts. The purpose of the tests, which were carried out soon after the take-home assignments were submitted, was to validate the students' assignments as individual work. We found widespread discrepancies between the marks achieved by students on the two types of task. Many students were able to achieve a much higher grade on the take-home assignments than on the invigilated tests. We conclude that these paired assessments are an effective way to quickly identify students who are still struggling with programming concepts that we might otherwise assume they understand, given their ability to complete similar, yet more complicated, tasks in their own time. We classify these students as not yet being at the neo-Piagetian stage of concrete operational reasoning.
Abstract:
This study investigates the relationships between work stressors and organizational performance in terms of the quality of care provided by long-term care facilities. Work stressors are first examined in relation to the unit's structural factors, resident characteristics, and unit specialization. The study is completed by an investigation into the associations of work stressors such as job demands or time pressure, role ambiguity, resident-related stress, and procedural injustice with organizational performance. The moderating effect of job control on the job demands-organizational performance relationship is also examined. The study was carried out in the National Research and Development Centre for Welfare and Health (STAKES). Survey data were drawn from 1194 nursing employees in 107 residential-home and health-center inpatient units in 1999 and from 977 employees in 91 units in 2002. Information on the unit resident characteristics and the quality of care was provided by the Resident Assessment Instrument (RAI). The results showed that large unit size or lower staffing levels were not consistently related to work stressors, whereas impairments in residents' physical functioning in particular created stressful working conditions for employees. However, unit specialization in dementia and psychiatric residents was found to buffer the effects that resident characteristics had on employee appraisals of work stressors, in that a high proportion of behavioral problems was related to less time pressure and fewer role conflicts for employees in specialized units. Unit specialization was also related to improved team climates and the organizational commitment of employees. Work stressors were associated with problems in care quality. Time pressure explained most of the differences between units in how the employees perceived the quality of physical and psychosocial care they provide for the residents. A high level of job demands in the unit was also found to be related to some increase in all clinical quality problems. High job control buffered the effects of job demands on the quality of care in terms of the use of restraints on elderly residents. Physical restraint and especially antipsychotic drug use were less prevalent in units that combined high job demands with high control for employees. In contrast, in high-strain units where heavy job demands coincided with a lack of control for employees, quality was poor in terms of the frequent use of physical restraints. In addition, procedural injustice was related to the frequent use of antianxiety or hypnotic drugs for elderly residents. The results suggest that both job control and procedural justice may have improved employees' abilities to cope when caring for elderly residents, resulting in better organizational performance.
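A minimal sketch of the moderation analysis described above, assuming standardized unit-level measures and an invented care-quality outcome: the job demands × job control interaction term is what captures the buffering effect. The data, coefficients, and variable names are illustrative assumptions, not the study's results.

```python
# Sketch: moderated (interaction) regression of a care-quality outcome on job
# demands, job control, and their interaction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_units = 100
demands = rng.normal(0, 1, n_units)          # standardized job demands per unit (made up)
control = rng.normal(0, 1, n_units)          # standardized job control per unit (made up)
# Hypothetical outcome: restraint use rises with demands, less so when control is high
restraint_use = (0.5 * demands - 0.3 * control
                 - 0.4 * demands * control + rng.normal(0, 1, n_units))

X = sm.add_constant(np.column_stack([demands, control, demands * control]))
model = sm.OLS(restraint_use, X).fit()
print(model.summary(xname=["const", "demands", "control", "demands_x_control"]))
```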
Abstract:
This article reports on a 6-year study that examined the association between pre-admission variables and field placement performance in an Australian bachelor of social work program (N=463). Very few of the pre-admission variables were found to be significantly associated with performance. These findings and the role of the admissions process are discussed. In addition to the usual academic criteria, the authors urge schools to include a focus on nonacademic criteria during the admissions process and the ongoing educational program.
Abstract:
Objective: To nationally trial the Primary Care Practice Improvement Tool (PC-PIT), an organisational performance improvement tool previously co-created with Australian primary care practices to increase their focus on relevant quality improvement (QI) activities. Design: The study was conducted from March to December 2015 with volunteer general practices from a range of Australian primary care settings. We used a mixed-methods approach in two parts. Part 1 involved staff in Australian primary care practices assessing how they perceived their practice met (or did not meet) each of the 13 PC-PIT elements of high-performing practices, using a 1–5 Likert scale. In Part 2, two external raters conducted an independent practice visit to objectively assess the subjective practice assessment from Part 1 against objective indicators for the 13 elements, using the same 1–5 Likert scale. Concordance between the raters was determined by comparing their ratings. In-depth interviews conducted during the independent practice visits explored practice managers’ experiences and their perceived support and resource needs for undertaking organisational improvement in practice. Results: Data were available for 34 general practices participating in Part 1. For Part 2, independent practice visits and the inter-rater comparison were conducted for a purposeful sample of 19 of the 34 practices. Overall concordance between the two raters for each of the assessed elements was excellent. Three practice types across a continuum of higher- to lower-scoring practices were identified, with each using the PC-PIT in a unique way. During the in-depth interviews, practice managers identified benefits of having additional QI tools that relate to the PC-PIT elements. Conclusions: The PC-PIT is an organisational performance tool that is acceptable, valid and relevant to our range of partners and the end users (general practices). Work is continuing with our partners and end users to embed the PC-PIT in existing organisational improvement programs.
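As an illustration of how concordance between two raters' 1–5 Likert ratings might be quantified, the sketch below computes a quadratically weighted Cohen's kappa on invented ratings; the abstract does not state which concordance statistic was used, so the choice of statistic and the data are assumptions.

```python
# Sketch: agreement between two raters scoring the 13 PC-PIT elements on a 1-5 scale.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
rater_1 = rng.integers(1, 6, size=13)          # invented scores for the 13 elements
noise = rng.integers(-1, 2, size=13)           # mostly small disagreements (-1, 0, +1)
rater_2 = np.clip(rater_1 + noise, 1, 5)

kappa = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
print("weighted kappa:", round(kappa, 2))
```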
Abstract:
The Disability Standards for Education (2005) and the relevant standards of the Australian Curriculum, Assessment and Reporting Authority underscore the right of students with disability to access the curriculum on the same basis as students without disability. Students with disability are entitled to rigorous, relevant and engaging learning opportunities drawn from the Australian Curriculum content. Taking this context into account, this paper provides a work-in-progress report on a two-year mathematics intervention project conducted in 12 special schools (Preparatory to Year 12) in Queensland, Australia. The project aims to build the capacity of teachers to teach mathematics to their students and to identify and make sense of the intervention program’s impact. It combines two approaches, appreciative inquiry and action research, to monitor the schools’ change processes. The interim findings demonstrated that teachers were concerned about their students’ underachievement in mathematics and that the multi-sensory forms of teaching advocated in the program increased student engagement and performance.
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic information-extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, with most labelling a pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and for modelling the primary classes for unmixing, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partially Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed through error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. The conventional Root Mean Square Error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply fixed sets of endmember classes or pure classes for the mixture analysis of every pixel in an entire image. Because it is not accurate to assume that every pixel in an image must contain all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
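A minimal sketch of pixel-wise constrained linear spectral unmixing, in the spirit of the methods compared above; the endmember spectra, band count, and pixel values are invented for illustration and do not reproduce the thesis's algorithms or data.

```python
# Sketch: non-negative least-squares unmixing of one mixed pixel, with the
# estimated fractions assessed against known reference fractions via RMSE.
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (4 bands x 3 classes: water, vegetation, built-up)
E = np.array([
    [0.05, 0.04, 0.30],
    [0.03, 0.10, 0.28],
    [0.02, 0.45, 0.25],
    [0.01, 0.30, 0.22],
])

def unmix(pixel, endmembers):
    """Non-negative least-squares abundances, renormalized to sum to one."""
    f, _ = nnls(endmembers, pixel)
    return f / f.sum() if f.sum() > 0 else f

# A mixed pixel: 60% water, 40% vegetation (plus a little noise)
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1] + 0.005
fractions = unmix(pixel, E)
rmse = np.sqrt(np.mean((fractions - np.array([0.6, 0.4, 0.0])) ** 2))
print("fractions:", np.round(fractions, 3), "RMSE:", round(rmse, 3))
```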
Abstract:
One of the foremost design considerations in microelectronics miniaturization is the use of embedded passives, which provide a practical solution. In a typical circuit, over 80 percent of the electronic components are passives such as resistors, inductors, and capacitors, and these can take up almost 50 percent of the entire printed circuit board area. By integrating passive components within the substrate instead of on the surface, embedded passives reduce the system real estate, eliminate the need for discrete components and their assembly, enhance electrical performance and reliability, and potentially reduce the overall cost. Moreover, the technology is lead-free. Even with these advantages, embedded passive technology is at a relatively immature stage, and more characterization and optimization are needed for the practical applications leading to its commercialization. This paper presents an entire process, from design and fabrication to electrical characterization and reliability testing, of embedded passives on a multilayered microvia organic substrate. Two test vehicles focusing on resistors and capacitors have been designed and fabricated. The embedded capacitors in this study are made with a polymer/ceramic nanocomposite (BaTiO3) material to take advantage of the low processing temperature of polymers and the relatively high dielectric constant of ceramics; their values range from 50 pF to 1.5 nF, with a capacitance per unit area of approximately 1.5 nF/cm². Limited high-frequency measurement of these capacitors was performed. Furthermore, reliability assessments comprising thermal shock and temperature-humidity tests based on JEDEC standards were carried out. The resistors used in this work are of three types: 1) carbon-ink-based polymer thick film (PTF), 2) resistor foils with known sheet resistivities, which are laminated to the printed wiring board (PWB) during a sequential build-up (SBU) process, and 3) thin-film resistors plated by an electroless method. The realization of embedded resistors on conventional board-level high-loss epoxy (~0.015 at 1 GHz) and the proposed low-loss BCB dielectric (~0.0008 at >40 GHz) has been explored in this study. Ni-P and Ni-W-P alloys were plated using conventional electroless plating, and NiCr and NiCrAlSi foils were used for the foil transfer process. For the first time, benzocyclobutene (BCB) has been proposed as a board-level dielectric for an advanced System-on-Package (SOP) module, primarily due to its attractive low-loss (for RF applications) and thin-film (for high-density wiring) properties. Although embedded passives are more reliable because they eliminate solder joint interconnects, they also introduce other concerns such as cracks, delamination, and component instability. More layers may be needed to accommodate the embedded passives, and the various materials within the substrate may cause significant thermo-mechanical stress due to coefficient of thermal expansion (CTE) mismatch. In this work, numerical models of embedded capacitors have been developed to qualitatively examine the effects of process conditions and the electrical performance under thermo-mechanical deformations. Also, a prototype working product with a board-level design including embedded resistor and capacitor features is underway. Preliminary results of these are presented.
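As a back-of-the-envelope check of the reported capacitance density, the parallel-plate relation C = ε0·εr·A/d gives roughly 1.5 nF per cm² for plausible nanocomposite permittivity and dielectric thickness values; the permittivity and thickness below are assumptions, not figures from the paper.

```python
# Sketch: parallel-plate estimate of capacitance per unit area.
eps0 = 8.854e-12          # F/m, vacuum permittivity
eps_r = 25                # assumed effective permittivity of a BaTiO3/polymer composite
thickness = 15e-6         # assumed dielectric thickness, 15 um

area = 1e-4               # 1 cm^2 expressed in m^2
capacitance = eps0 * eps_r * area / thickness
print(f"{capacitance * 1e9:.2f} nF per cm^2")   # ~1.5 nF/cm^2 for these assumptions
```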
Abstract:
In this paper, we present self-assessment schemes (SAS) for multiple agents performing a search mission on an unknown terrain. The agents are subject to limited communication and sensor ranges. The agents communicate and coordinate with their neighbours to arrive at route decisions. The self-assessment schemes proposed here have very low communication and computational overhead. The SAS also has attractive features such as scalability to a large number of agents and fast decision-making capability. SAS can be used with partial or complete information-sharing schemes during the search mission. We validate the performance of the SAS using simulations of 100 agents on a large search space, with different information structures and self-assessment schemes. We also compare the results obtained using the SAS with those of a previously proposed negotiation scheme. The simulation results show that the SAS is scalable to a large number of agents and can perform as well as the negotiation schemes with a reduced communication requirement (almost 20% of that required for negotiation).
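The sketch below gives a highly simplified flavour of a self-assessment route decision under a limited communication range: each agent scores candidate moves from its own view of the search uncertainty and avoids cells already claimed by neighbours it can hear. The scoring rule, map structure, and tie handling are illustrative assumptions and do not reproduce the paper's SAS.

```python
# Sketch: greedy, self-assessed route choice for agents with a limited communication range.
import numpy as np

rng = np.random.default_rng(5)
GRID = 20
uncertainty = rng.random((GRID, GRID))        # search uncertainty map (shared here for simplicity)
positions = [tuple(rng.integers(0, GRID, 2)) for _ in range(10)]   # 10 agents
COMM_RANGE = 5                                # Manhattan communication range (assumed)

def neighbours(cell):
    x, y = cell
    cand = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return [(i, j) for i, j in cand if 0 <= i < GRID and 0 <= j < GRID]

routes = []
for agent, pos in enumerate(positions):
    # Self-assessment: score each candidate move by the uncertainty it would resolve,
    # ignoring cells already claimed by agents within communication range.
    in_range_claims = {c for (p, c) in routes
                       if abs(p[0] - pos[0]) + abs(p[1] - pos[1]) <= COMM_RANGE}
    options = [c for c in neighbours(pos) if c not in in_range_claims] or neighbours(pos)
    best = max(options, key=lambda c: uncertainty[c])
    routes.append((pos, best))
    print(f"agent {agent} at {pos} -> {best} (score {uncertainty[best]:.2f})")
```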