201 results for Simulator of Performance in Error
Abstract:
INTRODUCTION In their target article, Yuri Hanin and Muza Hanina outlined a novel multidisciplinary approach to performance optimisation for sport psychologists called the Identification-Control-Correction (ICC) programme. According to the authors, this empirically verified, psycho-pedagogical strategy is designed to improve the quality of coaching and consistency of performance in highly skilled athletes and involves a number of steps, including: (i) identifying and increasing self-awareness of ‘optimal’ and ‘non-optimal’ movement patterns for individual athletes; (ii) learning to deliberately control the process of task execution; and (iii) correcting habitual and random errors and managing radical changes of movement patterns. Although no specific examples were provided, the ICC programme has apparently been successful in enhancing the performance of Olympic-level athletes. In this commentary, we address what we consider to be some important issues arising from the target article. We specifically focus attention on the contentious topic of optimisation in neurobiological movement systems, the role of constraints in shaping emergent movement patterns and the functional role of movement variability in producing stable performance outcomes. In our view, the target article and, indeed, the proposed ICC programme would benefit from a dynamical systems theoretical backdrop rather than the cognitive scientific approach that appears to be advocated. Although Hanin and Hanina made reference to, and attempted to integrate, constructs typically associated with dynamical systems theoretical accounts of motor control and learning (e.g., Bernstein’s problem, movement variability, etc.), these ideas required more detailed elaboration, which we provide in this commentary.
Abstract:
As the first academically rigorous interrogation of the generation of performance within the global frame of the motion capture volume, this research presents a historical contextualisation and develops and tests a set of first principles through an original series of theoretically informed, practical exercises to guide those working in the emergent space of performance capture. It contributes a new understanding of the framing of performance in The Omniscient Frame, and initiates and positions performance capture as a new and distinct interdisciplinary discourse in the fields of theatre, animation, performance studies and film.
Abstract:
Driving is often nominated as problematic by individuals with chronic whiplash associated disorders (WAD), yet driving-related performance has not been evaluated objectively. The purpose of this study was to test driving-related performance in persons with chronic WAD against healthy controls of similar age, gender and driving experience, to determine whether driving-related performance in the WAD group was sufficiently impaired to recommend fitness-to-drive assessment. Driving-related performance was assessed using an advanced driving simulator during three driving scenarios: freeway, residential and a central business district (CBD). Total driving duration was approximately 15 min. Five driving tasks which could cause a collision (critical events) were included in the scenarios. In addition, the effect of divided attention (identifying red dots projected onto the side or rear view mirrors) was assessed three times in each scenario. Driving performance was measured using the simulator performance index (SPI), which is calculated from 12 measures. z-Scores for all SPI measures were calculated for each WAD subject based on the mean values of the control subjects, and the z-scores were then averaged for the WAD group. A z-score of ≤ −2 indicated a failing driving grade in the simulator. The number of collisions over the five critical events was compared between the WAD and control groups, as were reaction time and missed response ratio in identifying the red dots. Seventeen WAD and 26 control subjects commenced the driving assessment. Demographic data were comparable between the groups. All subjects completed the freeway scenario, but four withdrew during the residential and eight during the CBD scenario because of motion sickness. All scenarios were completed by 14 WAD and 17 control subjects. The mean z-score for the SPI over the three scenarios was significantly lower in the WAD group (−0.3 ± 0.3; P < 0.05), but the score was not below the cut-off point for safe driving. There were no differences in reaction time or missed response ratio in the divided attention tasks between the groups (all P > 0.05). Assessment of driving in an advanced driving simulator for approximately 15 min revealed that driving-related performance in chronic WAD was not sufficiently impaired to recommend the need for fitness-to-drive assessment.
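The z-scoring procedure described in this abstract amounts to standardising each WAD subject's SPI measures against the control group and applying the ≤ −2 cut-off. The following sketch is a minimal illustration of that calculation, assuming hypothetical simulated SPI values rather than the study's data.

```python
import numpy as np

# Hypothetical SPI measures: rows = subjects, columns = 12 SPI component measures.
# These arrays are illustrative only; they are not the study's data.
control_spi = np.random.default_rng(0).normal(loc=0.0, scale=1.0, size=(26, 12))
wad_spi = np.random.default_rng(1).normal(loc=-0.3, scale=1.0, size=(17, 12))

# z-score each WAD subject's measures against the control-group mean and SD,
# then average across measures and subjects, as described in the abstract.
control_mean = control_spi.mean(axis=0)
control_sd = control_spi.std(axis=0, ddof=1)
wad_z = (wad_spi - control_mean) / control_sd   # per-subject, per-measure z-scores
wad_subject_z = wad_z.mean(axis=1)              # average z-score per WAD subject
group_mean_z = wad_subject_z.mean()

# A mean z-score of -2 or lower would indicate a failing grade in the simulator.
fails = wad_subject_z <= -2
print(f"Group mean z = {group_mean_z:.2f}; subjects below cut-off: {fails.sum()}")
```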
Abstract:
This study is the first to investigate the effect of prolonged reading on reading performance and visual functions in students with low vision. The study focuses on one of the most common modes of achieving adequate magnification for reading by students with low vision, their close reading distance (proximal or relative distance magnification). Close reading distances impose high demands on near visual functions, such as accommodation and convergence. Previous research on accommodation in children with low vision shows that their accommodative responses are reduced compared to those of children with normal vision. In addition, there is an increased lag of accommodation for higher stimulus levels, as may occur at close reading distances. Reduced accommodative responses in low vision and a higher lag of accommodation at close reading distances together could impact on the reading performance of students with low vision, especially during prolonged reading tasks. The presence of convergence anomalies could further affect reading performance. Therefore, the aims of the present study were: 1) to investigate the effect of prolonged reading on reading performance in students with low vision; and 2) to investigate the effect of prolonged reading on visual functions in students with low vision. This study was conducted as cross-sectional research on 42 students with low vision and a comparison group of 20 students with normal vision, aged 7 to 20 years. The students with low vision had vision impairments arising from a range of causes and represented a typical group of students with low vision, with no significant developmental delays, attending school in Brisbane, Australia. All participants underwent a battery of clinical tests before and after a prolonged reading task. An initial reading-specific history and pre-task measurements were recorded for all participants, including Bailey-Lovie distance and near visual acuities, Pelli-Robson contrast sensitivity, ocular deviations, sensory fusion, ocular motility, near point of accommodation (pull-away method), accuracy of accommodation (Monocular Estimation Method (MEM) retinoscopy) and Near Point of Convergence (NPC) (push-up method). Reading performance measures were Maximum Oral Reading Rates (MORR), Near Text Visual Acuity (NTVA) and acuity reserves, using Bailey-Lovie text charts. Symptoms of visual fatigue were assessed using the Convergence Insufficiency Symptom Survey (CISS) for all participants. Pre-task measurements of reading performance, accuracy of accommodation and NPC were compared with post-task measurements to test for any effects of prolonged reading. The prolonged reading task involved reading a storybook silently for at least 30 minutes. The task was controlled for print size, contrast, difficulty level and content of the reading material. Silent Reading Rate (SRR) was recorded every 2 minutes during prolonged reading. Symptom scores and visual fatigue scores were also obtained for all participants. A visual fatigue analogue scale (VAS) was used to assess visual fatigue during the task, once at the beginning, once in the middle and once at the end of the task. In addition to the subjective assessments of visual fatigue, tonic accommodation was monitored using a photorefractor (PlusoptiX CR03™) every 6 minutes during the task, as an objective assessment of visual fatigue. Reading measures were taken at the habitual reading distance of students with low vision and at 25 cm for students with normal vision.
The initial history showed that the students with low vision read for significantly shorter periods at home compared to the students with normal vision. The working distances of participants with low vision ranged from 3-25 cm and half of them were not using any optical devices for magnification. Nearly half of the participants with low vision were able to resolve 8-point print (1M) at 25 cm. Half of the participants in the low vision group had ocular deviations and suppression at near. Reading rates were significantly reduced in students with low vision compared to those of students with normal vision. In addition, a significantly larger number of participants in the low vision group could not sustain the 30-minute task compared to the normal vision group. However, there were no significant changes in reading rates during or following prolonged reading in either the low vision or normal vision groups. Individual changes in reading rates were independent of baseline reading rates, indicating that the changes in reading rates during prolonged reading cannot be predicted from a typical clinical assessment of reading using brief reading tasks. Contrary to previous reports, the silent reading rates of the students with low vision were significantly lower than their oral reading rates, although oral and silent reading were assessed using different methods. Although visual acuity, contrast sensitivity, near point of convergence and accuracy of accommodation were significantly poorer for the low vision group compared to those of the normal vision group, there were no significant changes in any of these visual functions following prolonged reading in either group. Interestingly, a few students with low vision (n = 10) were found to be reading at a distance closer than their near point of accommodation. This suggests a decreased sensitivity to blur. Further evaluation revealed that the equivalent intrinsic refractive errors (an estimate of the spherical dioptric defocus which would be expected to yield a patient’s visual acuity in normal subjects) were significantly larger for the low vision group compared to those of the normal vision group. As expected, accommodative responses were significantly reduced for the low vision group compared to the expected norms, which is consistent with their close reading distances, reduced visual acuity and contrast sensitivity. For those in the low vision group who had an accommodative error exceeding their equivalent intrinsic refractive errors, a significant decrease in MORR was found following prolonged reading. The silent reading rates, however, were not significantly affected by accommodative errors in the present study. Suppression also had a significant impact on the changes in reading rates during prolonged reading. The participants who did not have suppression at near showed significant decreases in silent reading rates during and following prolonged reading. This impact of binocular vision at near on prolonged reading was possibly due to the high demands on convergence. The significant predictors of MORR in the low vision group were age, NTVA, reading interest and reading comprehension, accounting for 61.7% of the variance in MORR. SRR was not significantly influenced by any factors, except for the duration of the reading task sustained; participants with higher reading rates were able to sustain a longer reading duration. In students with normal vision, age was the only predictor of MORR.
Participants with low vision also reported significantly greater visual fatigue compared to the normal vision group. Measures of tonic accommodation, however, were little influenced by visual fatigue in the present study. Visual fatigue analogue scores were found to be significantly associated with reading rates in students with low vision and normal vision. However, the patterns of association between visual fatigue and reading rates were different for SRR and MORR: participants with low vision who had higher symptom scores had lower SRRs, and participants with higher visual fatigue had lower MORRs. As hypothesized, visual functions such as accuracy of accommodation and convergence did have an impact on prolonged reading in students with low vision, specifically for students whose accommodative errors were greater than their equivalent intrinsic refractive errors, and for those who did not suppress one eye. Those students with low vision who have accommodative errors higher than their equivalent intrinsic refractive errors might benefit significantly from reading glasses. Similarly, considering prisms or occlusion for those without suppression might reduce the convergence demands on these students while they use their close reading distances. The impact of these prescriptions on reading rates, reading interest and visual fatigue is a promising area for future research. Most importantly, it is evident from the present study that a combination of factors such as accommodative errors, near point of convergence and suppression should be considered when prescribing reading devices for students with low vision. Considering these factors would also assist rehabilitation specialists in identifying those students who are likely to experience difficulty in prolonged reading, which is otherwise not reflected during typical clinical reading assessments.
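The predictor analysis summarised above (age, NTVA, reading interest and reading comprehension explaining 61.7% of the variance in MORR) is a standard multiple linear regression. The sketch below shows how such a model could be fitted; the column names and simulated values are assumptions for illustration, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; column names are hypothetical, not the study's variables.
rng = np.random.default_rng(42)
n = 42
df = pd.DataFrame({
    "age": rng.uniform(7, 20, n),
    "ntva": rng.normal(0.8, 0.3, n),          # near text visual acuity (logMAR)
    "reading_interest": rng.integers(1, 6, n),
    "comprehension": rng.uniform(50, 100, n),
})
df["morr"] = (60 + 5 * df["age"] - 40 * df["ntva"]
              + 8 * df["reading_interest"] + 0.3 * df["comprehension"]
              + rng.normal(0, 15, n))

# Ordinary least squares regression of MORR on the four reported predictors.
model = smf.ols("morr ~ age + ntva + reading_interest + comprehension", data=df).fit()
print(model.rsquared)   # analogous to the reported proportion of variance explained
print(model.summary())
```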
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, a Global Navigation Satellite System (GNSS)-based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involved evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluated the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments. RTK solutions achieved an RMS precision of 0.09 to 0.2 m in static and 0.2 to 0.3 m in kinematic tests, while PPP achieved 0.5 to 1.5 m in static and 1 to 1.8 m in kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. The professional-grade (dual-frequency) and mass-market-grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market-grade receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare the network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared with the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network.
The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, it was found that 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results. The experimental results from the static and kinematic field tests also showed that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 m. The results showed that the position accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
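The RMS precision and throughput figures reported above are straightforward to reproduce in principle: per-epoch position errors against a reference trajectory are combined into an RMS statistic, and the RTCM format comparison is a simple ratio of transmitted data volumes. The sketch below illustrates both calculations with hypothetical values; it is not the project's processing code.

```python
import numpy as np

# Hypothetical per-epoch horizontal position errors (metres) of an RTK solution
# relative to a reference trajectory; illustrative values only.
east_err = np.random.default_rng(7).normal(0.0, 0.10, 3600)
north_err = np.random.default_rng(8).normal(0.0, 0.12, 3600)

# Horizontal RMS error over the session, comparable to the quoted 0.09-0.3 m figures.
horizontal_rms = np.sqrt(np.mean(east_err**2 + north_err**2))
print(f"Horizontal RMS: {horizontal_rms:.2f} m")

# Throughput reduction between two correction formats, as in the RTCM 2.x vs 3.0 test.
bytes_rtcm2 = 1_000_000   # hypothetical bytes sent over 24 h with RTCM 2.x
bytes_rtcm3 = 340_000     # hypothetical bytes sent over 24 h with RTCM 3.0
reduction = 100 * (1 - bytes_rtcm3 / bytes_rtcm2)
print(f"Throughput reduction: {reduction:.0f}%")   # ~66% reduction, as reported
```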
Abstract:
This report summarises a project designed to enhance commercial real estate performance within both operational and investment contexts through the development of a model aimed at supporting improved decision-making. The model is based on a risk-adjusted discounted cash flow, providing a valuable toolkit for building managers, owners and potential investors for evaluating individual building performance in terms of financial, social and environmental criteria over the complete life-cycle of the asset. The ‘triple bottom line’ approach to the evaluation of commercial property has particular significance for the administrators of public property portfolios. It also has applications more generally for the wider real estate industry, given that the advent of ‘green’ construction requires new methods for evaluating both new and existing building stocks. The research is unique in that it focuses on the accuracy of the input variables required for the model. These key variables were largely determined by market-based research and an extensive literature review, and have been fine-tuned with extensive testing. In essence, the project has considered probability-based risk analysis techniques that required market-based assessment. The projections listed in the partner engineers’ building audit reports for the four case study buildings were fed into the property evaluation model developed by the research team. The results are strongly consistent with previously existing, less robust evaluation techniques. Importantly, this model pioneers an approach for taking full account of the triple bottom line, establishing a benchmark for related research to follow. The project’s industry partners expressed a high degree of satisfaction with the project outcomes at a recent demonstration seminar. The project in its existing form has not been geared towards commercial applications, but it is anticipated that QDPW and other industry partners will benefit greatly from using this tool for the performance evaluation of property assets. The project met the objectives of the original proposal as well as all the specified milestones, and was completed within budget and on time. The research achieved these objectives by establishing research foci on the model structure, the identification of key input variables, the drivers of the relevant property markets and the determinants of the key variables (Research Engine No. 1); the examination of risk measurement and the incorporation of risk simulation exercises (Research Engine No. 2); and the importance of both environmental and social factors and, finally, the impact of the triple bottom line measures on the asset (Research Engine No. 3).
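The core of the model described above is a risk-adjusted discounted cash flow combined with probability-based risk analysis. The sketch below illustrates that general technique as a Monte Carlo simulation of uncertain cash flows discounted at a risk-adjusted rate; all figures and parameter names are hypothetical and are not taken from the project's model.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical building cash-flow assumptions (AUD, over a 10-year horizon).
years = 10
expected_net_income = 1_200_000   # expected annual net operating income
income_volatility = 0.15          # relative uncertainty in annual income
terminal_value = 15_000_000       # expected sale value at the end of year 10
discount_rate = 0.08              # risk-adjusted discount rate

def simulate_npv(n_trials: int = 10_000) -> np.ndarray:
    """Monte Carlo simulation of the building's net present value."""
    incomes = rng.normal(expected_net_income,
                         income_volatility * expected_net_income,
                         size=(n_trials, years))
    discount = (1 + discount_rate) ** np.arange(1, years + 1)
    npv = (incomes / discount).sum(axis=1) + terminal_value / (1 + discount_rate) ** years
    return npv

npv = simulate_npv()
print(f"Mean NPV: {npv.mean():,.0f} AUD")
print(f"5th-95th percentile range: {np.percentile(npv, 5):,.0f} to {np.percentile(npv, 95):,.0f} AUD")
```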
Abstract:
In Australia, between 1994 and 2000, 50 construction workers were killed each year as a result of their work. The industry fatality rate, at 10.4 per 100,000 persons, is similar to the national road toll fatality rate, and the rate of serious injury is 50% higher than the all-industries average. This poor performance represents a significant threat to the industry’s social sustainability. Despite the best efforts of regulators and policy makers at both State and Federal levels, the incidence of death, injury and illness in the Australian construction industry has remained intransigently high, prompting an industry-led initiative to improve the occupational health and safety (OHS) performance of the Australian construction industry. The ‘Safer Construction’ project involves the development of an evidence-based Voluntary Code of Practice for OHS in the industry.
Abstract:
Tested D. J. Kavanagh's (1983) depression model's explanation of response to cognitive-behavioral treatment in 19 Ss (aged 20–60 yrs) who received treatment and 24 age-matched Ss who were assigned to a waiting list. Measures included the Beck Depression Inventory and self-efficacy (SE) and self-monitoring scales. Rises in SE and in self-monitored performance of targeted skills were closely associated with the improved depression scores of treated Ss. Improvements in the depression of waiting-list Ss occurred through random, uncontrolled events rather than via a systematic increase in the specific skills targeted in treatment. SE regarding assertion also predicted depression scores over a 12-wk follow-up.
Abstract:
Research on expertise, talent identification and development has tended to be mono-disciplinary, typically adopting neurogenetic deterministic or environmentalist positions, with an overriding focus on operational issues. In this paper the validity of dualist positions on sport expertise is evaluated. It is argued that, to advance understanding of expertise and talent development, a shift towards a multi-disciplinary and integrative science focus is necessary, along with the development of a comprehensive multi-disciplinary theoretical rationale. Here we elucidate dynamical systems theory as a multi-disciplinary theoretical rationale for capturing how multiple interacting constraints can shape the development of expert performers. This approach suggests that talent development programmes should eschew the notion of common optimal performance models, emphasise the individual nature of pathways to expertise, and identify the range of interacting constraints that impinge on the performance potential of individual athletes, rather than evaluating current performance on physical tests referenced to group norms.
Abstract:
Emerging-market importers are increasingly engaging in relationships with foreign suppliers. Nevertheless, characteristics of the institutional and cultural environments of countries may affect relationship behaviour. Furthermore, research on relationship marketing primarily focuses on the marketing activities of exporters from developed countries, and much less attention is paid to the import side of the exchange process. Thus, the objective of this study is to empirically examine importer relationship performance in a Latin American context. This article proposes and tests a conceptual model that includes the antecedents and outcomes of trust and commitment, using a survey of Chilean importers. The model uses confirmatory factor analysis (CFA) to develop the construct measures and structural equation modelling (SEM) to test the model. The findings of this study contribute to a better understanding of the driving forces of trust and commitment and their influence on importing firms' performance in an emerging market context.
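The two-step approach reported above (CFA for the construct measures, then SEM for the structural paths) can be expressed as a single model specification. The sketch below is a hypothetical, simplified specification using the Python semopy package, which accepts lavaan-style syntax; the construct indicators, the survey.csv file name and the path structure are assumptions for illustration, not the study's instrument or data.

```python
import pandas as pd
from semopy import Model

# Measurement part (=~) defines the latent constructs from survey items;
# structural part (~) tests the proposed paths from trust and commitment
# to importer relationship performance. All names are illustrative.
model_desc = """
trust       =~ t1 + t2 + t3
commitment  =~ c1 + c2 + c3
performance =~ p1 + p2 + p3
commitment  ~ trust
performance ~ trust + commitment
"""

# "survey.csv" stands in for the Chilean importer survey data, which is not available here.
data = pd.read_csv("survey.csv")
model = Model(model_desc)
model.fit(data)
print(model.inspect())   # parameter estimates for loadings and structural paths
```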