735 results for Productive performance
Abstract:
With the projected increase in older adults, the older driver population is estimated to be the fastest growing cohort of drivers in many developed countries. The increased physical fragility associated with the aging process makes older adults who drive private automobiles a vulnerable road user group. Much of the current research on older drivers’ behaviours and practices relies on self-report data. This paper explores the utility of in-vehicle devices (Global Positioning Systems and recording accelerometers) in assessing older drivers’ habitual driving behaviours. Seventy-eight older drivers (above 65 years of age) from the Australian Capital Territory, Australia, participated in the current study. The driving behaviours and practices of these participants were prospectively assessed over a two-week period. The combined use of GPS and recording accelerometers to improve understanding of older drivers’ driving behaviours shows promise within the current study. The challenges of using multiple in-vehicle devices to assess driving behaviours and performance within this cohort are discussed. Based on the current findings, recommendations for future research regarding the use of in-vehicle devices among the older driver cohort are proposed.
Abstract:
Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed when a mission must be aborted due to mechanical or other failure. On-board cameras provide information that can be used in the determination of potential landing sites, which are continually updated and ranked to prevent injury and minimize damage. Pulse Coupled Neural Networks (PCNNs) have been used for the detection of features in images that assist in the classification of vegetation and can be used to minimize damage to the aerial vehicle. However, a significant drawback in the use of PCNNs is that they are computationally expensive and have been better suited to off-line applications on conventional computing architectures. As heterogeneous computing architectures become more common, an OpenCL implementation of a PCNN feature generator is presented and its performance is compared across OpenCL kernels designed for CPU, GPU and FPGA platforms. This comparison examines the compute times required for network convergence under a variety of images obtained during unmanned aerial vehicle trials to determine the feasibility of real-time feature detection.
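For readers unfamiliar with the model, the following is a minimal sequential sketch of a standard PCNN iteration of the kind such OpenCL kernels would parallelise; the coupling kernel, parameter values, and the firing-count "time signature" feature are illustrative assumptions, not details taken from the study.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_features(img, n_iter=30, beta=0.2,
                  aF=0.1, aL=1.0, aE=1.0, vF=0.5, vL=0.2, vE=20.0):
    """Sequential PCNN sketch: each pixel is a neuron with feeding (F),
    linking (L), internal activity (U), pulse output (Y) and dynamic
    threshold (E) fields. All parameter values are illustrative only."""
    S = img.astype(float) / img.max()            # normalised stimulus
    F = np.zeros_like(S); L = np.zeros_like(S)   # feeding / linking fields
    E = np.ones_like(S);  Y = np.zeros_like(S)   # threshold / pulse output
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # local coupling weights (assumed)
    signature = []                               # firing count per iteration
    for _ in range(n_iter):
        F = np.exp(-aF) * F + vF * convolve(Y, K, mode='constant') + S
        L = np.exp(-aL) * L + vL * convolve(Y, K, mode='constant')
        U = F * (1.0 + beta * L)                 # internal activity
        Y = (U > E).astype(float)                # neurons fire above threshold
        E = np.exp(-aE) * E + vE * Y             # firing raises the threshold
        signature.append(Y.sum())                # one feature value per step
    return np.array(signature)                   # the image's "time signature"
```

The convolutions and element-wise updates within each iteration are independent per pixel, which is what makes the per-iteration work embarrassingly parallel and a natural fit for CPU, GPU and FPGA OpenCL kernels.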
Abstract:
Coordination of dynamic interceptive movements is predicated on cyclical relations between an individual's actions and information sources from the performance environment. To identify dynamic informational constraints, which are interwoven with individual and task constraints, coaches’ experiential knowledge provides a complementary source to support empirical understanding of performance in sport. In this study, 15 expert coaches from 3 sports (track and field, gymnastics and cricket) participated in a semi-structured interview process to identify potential informational constraints which they perceived to regulate action during run-up performance. Expert coaches’ experiential knowledge revealed multiple information sources which may constrain performance adaptations in such locomotor pointing tasks. In addition to the locomotor pointing target, coaches’ knowledge highlighted two other key informational constraints: vertical reference points located near the locomotor pointing target and a check mark located prior to the locomotor pointing target. This study highlights opportunities for broadening the understanding of perception and action coupling processes, and the identified information sources warrant further empirical investigation as potential constraints on athletic performance. Integration of experiential knowledge of expert coaches with theoretically driven empirical knowledge represents a promising avenue to drive future applied science research and pedagogical practice.
Abstract:
Over the last 40 years, the term mentoring has been hailed as an important workplace learning activity, and applied in a variety of contexts such as government departments, hospitals, schools and community settings. It has been used to support the learning and development of new employees and leaders, as well as for the purposes of talent management and retention. Not surprisingly, its meaning often depends on the purpose for which it has been used and the particular context in which it has been applied. Most adults can identify a person who has had a major positive impact on their lives, e.g. a boss, a coach or a teacher, who has acted as a mentor to them. Today, organisations are embracing the concept of mentoring as a professional development tool through which improvements in efficiency and productivity can be achieved and corporate knowledge and leadership skills can be passed on.
Abstract:
Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnosis of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of and need for an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
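As a hypothetical illustration of how the quoted pass/fail thresholds could be applied to a run's metrics (the metric names and input format below are assumptions, not the published SSP tooling):

```python
# Pass/fail thresholds quoted in the abstract; key names are illustrative.
PASS_CRITERIA = {
    "peak_area_cv": 0.15,    # peak area coefficient of variation < 0.15
    "peak_width_cv": 0.15,   # peak width coefficient of variation < 0.15
    "rt_sd_min": 0.15,       # standard deviation of RT < 0.15 min (9 s)
    "rt_drift_min": 0.5,     # RT drift < 0.5 min (30 s)
}

def system_suitability(metrics: dict) -> dict:
    """Return a per-metric pass (True) / fail (False) verdict for one run."""
    return {name: metrics[name] < limit for name, limit in PASS_CRITERIA.items()}

# Example: a platform with drifting retention times fails only that check.
print(system_suitability({"peak_area_cv": 0.08, "peak_width_cv": 0.11,
                          "rt_sd_min": 0.05, "rt_drift_min": 0.62}))
# {'peak_area_cv': True, 'peak_width_cv': True, 'rt_sd_min': True, 'rt_drift_min': False}
```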
Abstract:
The ways we assume, observe and model “presence” and its effects are the focus of this paper. Entities with selectively shared presences are the basis of any collective, and of attributions (such as “humorous”, “efficient” or “intelligent”). The subtleties of any joint presence can markedly influence the potentials, perceptions and performance of the collective, as demonstrated when a humorous tale is counterpoised with disciplined thought. Disciplines build on presences assumed known or knowable, while fluid and interpretable presences pervade humor. The explorations in this paper allow consideration of collectives, causality and the philosophy of computing. Economics has long considered issues of collective action in ways circumscribed by assumptions about the presence of economic entities. Such entities are deemed rational, but they are clearly not intelligent. To reach its potential, collective intelligence research needs more adequate consideration of alternate presences and their impacts.
Abstract:
This paper proposes to adopt a data envelopment analysis (DEA)-based Malmquist total factor productivity (TFP) index method to evaluate the effect of mergers and acquisitions (M&As) on acquirers over short-term and long-term windows. Based on an analysis of 32 M&A deals conducted by Chinese real estate firms from 2000 to 2011, the results demonstrate that the effect of M&A on developers’ performance is positive. Through M&A, the developers’ Malmquist TFP experienced steady growth; their technology made noticeable progress immediately after acquisition; and their technical efficiency suffered a slight decrease in the short term after acquisition, but then achieved a marked increase in the long term as integration and synergy were realized. However, there is no evidence that the real estate firms achieved scale efficiency improvement after M&A in either the short term or the long term.
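The abstract does not spell out the index; under the standard Färe et al. formulation assumed here, the output-oriented Malmquist TFP index between periods t and t+1 is

\[
M_o = \left[ \frac{D_o^t(x^{t+1}, y^{t+1})}{D_o^t(x^t, y^t)} \cdot \frac{D_o^{t+1}(x^{t+1}, y^{t+1})}{D_o^{t+1}(x^t, y^t)} \right]^{1/2}
= \underbrace{\frac{D_o^{t+1}(x^{t+1}, y^{t+1})}{D_o^t(x^t, y^t)}}_{\text{efficiency change}} \times \underbrace{\left[ \frac{D_o^t(x^{t+1}, y^{t+1})}{D_o^{t+1}(x^{t+1}, y^{t+1})} \cdot \frac{D_o^t(x^t, y^t)}{D_o^{t+1}(x^t, y^t)} \right]^{1/2}}_{\text{technical change}}
\]

where \(D_o^s\) is the output distance function measured against the period-s frontier (estimated here via DEA). Values above one indicate TFP growth, and the two factors correspond to the technical-efficiency change and technological progress that the abstract tracks separately.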
Abstract:
This study investigates the role of environmental dynamics (i.e., market turbulence) as a factor influencing an organisation’s top management temporal orientation, and the impact of temporal orientation on innovative and financial performance. Results show that firms operating in highly turbulent markets exhibit higher degrees of future orientation, as opposed to present orientation. Future-oriented (rather than present-oriented) firms also experience higher levels of both incremental and radical innovation, which in turn generate financial performance. The study highlights the important role of a shared strategic mindset (which is contextually influenced) as a driving factor behind a firm’s innovative and financial performance.
Abstract:
Purpose – The purpose of this paper is to develop an effective methodology for implementing lean manufacturing strategies and a leanness evaluation metric using continuous performance measurement (CPM). Design/methodology/approach – Based on five lean principles, a systematic lean implementation methodology for manufacturing organizations has been proposed. A simplified leanness evaluation metric consisting of both efficiency and effectiveness attributes of manufacturing performance has been developed for continuous evaluation of lean implementation. A case study has been conducted to validate the proposed methodology, and the proposed CPM metric has been used to assess manufacturing leanness. Findings – The proposed methodology is able to systematically identify manufacturing wastes, select appropriate lean tools, identify relevant performance indicators, achieve significant performance improvement and establish a lean culture in the organization. Continuous performance measurement metrics in terms of efficiency and effectiveness prove to be appropriate for continuous evaluation of lean performance. Research limitations/implications – The effectiveness of the method developed has been demonstrated by applying it in a real-life assembly process. However, more tests/applications will be necessary to generalize the findings. Practical implications – Results show that, by applying the methods developed, managers can successfully identify and remove manufacturing wastes from their production processes. By improving process efficiency, they can optimize their resource allocations. Manufacturers now have a validated step-by-step methodology for successfully implementing lean strategies. Originality/value – To the best of the authors’ knowledge, this is the first study to propose a systematic lean implementation methodology based on lean principles and continuous improvement techniques. Evaluation of the performance improvement achieved by lean strategies is a critical issue. This study develops a simplified leanness evaluation metric considering both efficiency and effectiveness attributes and integrates it with the lean implementation methodology.
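The abstract does not define the CPM metric itself; purely as an illustrative sketch, a leanness score combining one efficiency attribute and one effectiveness attribute might look like the following (the attribute definitions and the geometric-mean aggregation are assumptions, not the paper's method):

```python
# Illustrative leanness score; NOT the paper's actual CPM metric.
def leanness(value_added_time, total_cycle_time, on_spec_units, total_units):
    efficiency = value_added_time / total_cycle_time  # how well resources are used
    effectiveness = on_spec_units / total_units       # how well goals are met
    return (efficiency * effectiveness) ** 0.5        # geometric mean, in [0, 1]

# Example: 6 h of value-added work in a 16 h cycle, 94 of 100 units on spec.
print(round(leanness(6, 16, 94, 100), 3))  # -> 0.594
```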
Abstract:
Introduction: Sleep restriction and missing one night’s continuous positive airway pressure (CPAP) treatment are scenarios faced by obstructive sleep apnoea (OSA) patients, who must then assess their own fitness to drive. This study aims to assess the impact of these scenarios on driving performance. Method: 11 CPAP-treated participants (50–75 yrs) drove an interactive car simulator under monotonous motorway conditions for 2 hours on 3 afternoons, following: (i) a normal night’s sleep (average 8.2 h) with CPAP; (ii) sleep restriction (5 h) with CPAP; (iii) a normal length of sleep without CPAP. Driving incidents were noted if the car came out of the designated driving lane. EEG was recorded continually and the Karolinska Sleepiness Scale (KSS) was reported every 200 seconds. Results: Driving incidents: incidents were more prevalent following CPAP withdrawal during hour 1, demonstrating a significant condition × time interaction [F(6,60) = 3.40, p = 0.006]. KSS: at the start of driving, participants felt sleepiest following CPAP withdrawal; by the end of the task, KSS levels were similar following CPAP withdrawal and sleep restriction, demonstrating a significant condition × time interaction [F(3.94,39.41) = 3.39, p = 0.018]. EEG: there was a non-significant trend for combined alpha and theta activity to be highest throughout the drive following CPAP withdrawal. Discussion: CPAP withdrawal impairs driving simulator performance sooner than restricting sleep to 5 h with CPAP. Participants had insight into this increased sleepiness, reflected in the higher KSS reported following CPAP withdrawal. In practical driving terms, any one incident could be fatal. The earlier impairment reported here demonstrates the potential danger of missing CPAP treatment and highlights the benefit of CPAP treatment even when sleep time is short.
Abstract:
Light absorption efficiency of heterogeneous catalysts has restricted their photocatalytic capability for commercially important organic synthesis. Here, we report a way of harvesting visible light efficiently to boost zeolite catalysis by means of plasmonic gold nanoparticles (Au-NPs) supported on zeolites. Zeolites possess strong Brønsted acids and polarized electric fields created by extra-framework cations. The polarized electric fields can be further intensified by the electric near-field enhancement of Au-NPs, which results from localized surface plasmon resonance (LSPR) upon visible light irradiation. The acetalization reaction was selected as a showcase, performed on MZSM-5 and Au/MZSM-5 (M = H+, Na+, Ca2+, or La3+). Density functional theory (DFT) calculations confirmed that the intensified polarized electric fields play a critical role in stretching the C=O bond of the benzaldehyde reactant, enlarging its molecular polarity and thus allowing the reactant to be activated more efficiently by catalytic centers so as to boost reaction rates. This discovery should evoke intensive research interest in plasmonic metals and diverse zeolites, with an aim to take advantage of sunlight for plasmonic devices, molecular electronics, energy storage, and catalysis.
Abstract:
In many active noise control (ANC) applications, an online secondary path modelling method that uses white noise as a training signal is required. This paper proposes a new feedback ANC system. Here, we modify both the FxLMS and the VSS-LMS algorithms to raise the noise attenuation and modelling accuracy of the overall system. The proposed algorithm stops injection of the white noise at the optimum point and reactivates the injection during operation, if needed, to maintain system performance. Preventing continuous injection of the white noise increases the performance of the proposed method significantly and makes it more desirable for practical ANC systems. Computer simulation results shown in this paper indicate the effectiveness of the proposed method.
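For context, the sketch below is a minimal textbook feedforward FxLMS loop, without the paper's VSS-LMS modification, feedback structure, or white-noise scheduling; the plant and secondary-path coefficients are invented for the simulation, and the secondary-path estimate is assumed known rather than modelled online.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, L = 5000, 32                 # simulation length, control filter taps
s = np.array([0.0, 0.5, 0.3, 0.1])      # secondary path S(z) (invented)
s_hat = s.copy()                        # secondary-path estimate (assumed known)

x = rng.standard_normal(n_samples)      # reference noise signal
# Primary path P(z) (invented; delayed so causal cancellation is possible).
d = np.convolve(x, [0.0, 0.0, 1.0, 0.7, 0.2])[:n_samples]

w = np.zeros(L)                         # adaptive control filter W(z)
mu = 0.005                              # fixed step size (VSS-LMS would adapt this)
x_buf = np.zeros(L); fx_buf = np.zeros(L); y_buf = np.zeros(len(s))
e_hist = np.zeros(n_samples)

for n in range(n_samples):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = w @ x_buf                                   # anti-noise output
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[n] + s @ y_buf                            # residual at the error mic
    fx = s_hat @ x_buf[:len(s_hat)]                 # reference filtered by S-hat
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w -= mu * e * fx_buf                            # FxLMS weight update
    e_hist[n] = e

# Attenuation achieved between the first and last 500 samples.
print(10 * np.log10(np.mean(e_hist[:500]**2) / np.mean(e_hist[-500:]**2)), "dB")
```

The key FxLMS idea visible here is that the weight update correlates the error not with the raw reference but with the reference filtered through the secondary-path estimate, which is why the quality of online secondary-path modelling (the white-noise injection the paper controls) matters for attenuation.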
Abstract:
In this age of rapidly evolving technology, teachers are encouraged to adopt ICTs by government, syllabus, school management, and parents. Indeed, it is an expectation that teachers will incorporate technologies into their classroom teaching practices to enhance the learning experiences and outcomes of their students. In particular, regarding the science classroom, a subject that traditionally incorporates hands-on experiments and practicals, the integration of modern technologies should be a major feature. Although myriad studies report on technologies that enhance students’ learning outcomes in science, there is a dearth of literature on how teachers go about selecting technologies for use in the science classroom. Teachers can feel ill-prepared to assess the range of available choices and might feel pressured and somewhat overwhelmed by the avalanche of new developments thrust before them in marketing literature and teaching journals. The consequences of making bad decisions are costly in terms of money, time and teacher confidence. Additionally, no research to date has identified what technologies science teachers use on a regular basis, and whether some purchased technologies have proven to be too problematic, preventing their sustained use and possible wider adoption. The primary aim of this study was to provide research-based guidance to teachers to aid their decision-making in choosing technologies for the science classroom. The study unfolded in several phases. The first phase of the project involved survey and interview data from teachers in relation to the technologies they currently use in their science classrooms and the frequency of their use. These data were coded and analysed using the Grounded Theory approach of Corbin and Strauss, and resulted in the development of a PETTaL model that captured the salient factors of the data. This model incorporated usability theory from the Human Computer Interaction literature, and education theory and models such as Mishra and Koehler’s (2006) TPACK model, where the grounded data indicated these issues. The PETTaL model identifies Power (school management, syllabus, etc.), Environment (classroom / learning setting), Teacher (personal characteristics, experience, epistemology), Technology (usability, versatility, etc.) and Learners (academic ability, diversity, behaviour, etc.) as fields that can impact the use of technology in science classrooms. The PETTaL model was used to create a Predictive Evaluation Tool (PET): a tool designed to assist teachers in choosing technologies, particularly for science teaching and learning. The evolution of the PET was cyclical (employing agile development methodology), involving repeated testing with in-service and pre-service teachers at each iteration, and incorporating their comments in subsequent versions. Once no new suggestions were forthcoming, the PET was tested with eight in-service teachers, and the results showed that the PET outcomes obtained by (experienced) teachers concurred with their instinctive evaluations. They felt the PET would be a valuable tool when considering new technology, and it would be particularly useful as a means of communicating perceived value between colleagues and between budget holders and requestors during the acquisition process. It is hoped that the PET could make the tacit knowledge acquired by experienced teachers about technology use in classrooms explicit to novice teachers.
Additionally, the PET could be used as a research tool to discover a teacher’s professional development needs. Therefore, the outcomes of this study can aid a teacher in the process of selecting educationally productive and sustainable new technology for their science classrooms. This study has produced an instrument for assisting teachers in the decision-making process associated with the use of new technologies for the science classroom. The instrument is generic in that it can be applied to all subject areas. Further, this study has produced a powerful model that extends the TPACK model, which is currently extensively employed to assess teachers’ use of technology in the classroom. The PETTaL model, grounded in data from this study, responds to calls in the literature for TPACK’s further development. As a theoretical model, PETTaL has the potential to serve as a framework for the development of a teacher’s reflective practice (either self-evaluation or critical evaluation of observed teaching practices). Additionally, PETTaL has the potential to aid the formulation of a teacher’s personal professional development plan. It will be the basis for further studies in this field.
Abstract:
Collaborative infrastructure projects use hybrid formal and informal governance structures to manage transactions. Based on previous desktop research, the authors identified the key mechanisms underlying project governance and posited the performance implications of that governance (Chen et al. 2012). The current paper extends that qualitative research by testing the veracity of those findings using data from 320 Australian construction organisations. The results provide, for the first time, reliable and valid scales to measure the governance and performance of collaborative projects, and the relationship between them. The results confirm seven of seven hypothesised governance mechanisms; 30 of 43 hypothesised underlying actions; eight of eight hypothesised key performance indicators; and the dual importance of formal and informal governance. A startling finding of the study was that the implementation intensity of informal mechanisms (non-contractual conditions) is a greater predictor of project performance variance than that of formal mechanisms (contractual conditions). Further, contractual conditions do not directly impact project performance; instead, their impact is mediated by the non-contractual features of a project. Obligations established under the contract are not sufficient to optimise project performance.