Abstract:
Purpose: The cornea is known to be susceptible to forces exerted by eyelids. There have been previous attempts to quantify eyelid pressure but the reliability of the results is unclear. The purpose of this study was to develop a technique using piezoresistive pressure sensors to measure upper eyelid pressure on the cornea. Methods: The technique was based on the use of thin (0.18 mm) tactile piezoresistive pressure sensors, which generate a signal related to the applied pressure. A range of factors that influence the response of this pressure sensor were investigated along with the optimal method of placing the sensor in the eye. Results: Curvature of the pressure sensor was found to impart force, so the sensor needed to remain flat during measurements. A large rigid contact lens was designed to have a flat region to which the sensor was attached. To stabilise the contact lens during measurement, an apparatus was designed to hold and position the sensor and contact lens combination on the eye. A calibration system was designed to apply even pressure to the sensor when attached to the contact lens, so the raw digital output could be converted to actual pressure units. Conclusions: Several novel procedures were developed to use tactile sensors to measure eyelid pressure. The quantification of eyelid pressure has a number of applications including eyelid reconstructive surgery and the design of soft and rigid contact lenses.
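The calibration step described above amounts to fitting a transfer function from the sensor's raw digital output to known applied pressures. As an illustration only (the abstract does not give the sensor's actual transfer function, and the values below are invented), a minimal linear calibration sketch in Python:

```python
import numpy as np

# Hypothetical calibration data: raw digital output recorded while the
# calibration rig applies known, uniform pressures to the flat sensor.
raw_counts = np.array([12, 55, 101, 148, 197])      # sensor output (arbitrary units)
applied_kpa = np.array([0.0, 2.5, 5.0, 7.5, 10.0])  # known applied pressure (kPa)

# Fit a first-order transfer function: pressure = gain * counts + offset.
gain, offset = np.polyfit(raw_counts, applied_kpa, deg=1)

def counts_to_pressure(counts: float) -> float:
    """Convert raw digital output to pressure (kPa) via the fitted line."""
    return gain * counts + offset

print(counts_to_pressure(120))  # interpolated pressure estimate
```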
Abstract:
Purpose: There have been few studies of visual temporal processing of myopic eyes. This study investigated the visual performance of emmetropic and myopic eyes using a backward visual masking location task. Methods: Data were collected for 39 subjects (15 emmetropes, 12 stable myopes, 12 progressing myopes). In backward visual masking, a target’s visibility is reduced by a mask presented in quick succession ‘after’ the target. The target and mask stimuli were presented at different interstimulus intervals (from 12 to 300 ms). The task involved locating the position of a target letter at both a higher (seven per cent) and a lower (five per cent) contrast. Results: Emmetropic subjects had significantly better performance on the lower contrast location task than the myopes (F(2,36) = 22.88, p < 0.001) but there was no difference between the progressing and stable myopic groups (p = 0.911). There were no differences between the groups for the higher contrast location task (F(2,36) = 0.72, p = 0.495). No relationship between task performance and either the magnitude of myopia or axial length was found for either task. Conclusions: A location task deficit was observed in myopes only for lower contrast stimuli. Both emmetropic and myopic groups had better performance for the higher contrast task compared to the lower contrast task, with myopes showing considerable improvement. This suggests that five per cent contrast may be the contrast threshold required to bias the task towards the magnocellular system (where myopes have a temporal processing deficit). Alternatively, the task may be sensitive to the contrast sensitivity of the observer.
Abstract:
Purpose: To compare the eye and head movements and lane-keeping of drivers with hemianopia and quadrantanopia with those of age-matched controls when driving under real-world conditions. Methods: Participants included 22 persons with hemianopia and 8 with quadrantanopia (mean age 53 years), each at least 6 months past the brain injury date and either currently driving or aiming to resume driving, and 30 persons with normal visual fields (mean age 52 years). All participants drove an instrumented dual-brake vehicle along a 14-mile route in traffic that included non-interstate city driving and interstate driving. Driving performance was scored by two “backseat” raters using a standardised assessment system and by the Vigil Vanguard system, which provides objective measures of speed, braking, acceleration and cornering, as well as video footage from which eye and head movements and lane-keeping can be derived. Results: Compared to drivers with normal visual fields, drivers with hemianopia or quadrantanopia on average drove more slowly, exhibited less excessive cornering force and acceleration, and made more shoulder movements off the seat. Those hemianopic and quadrantanopic drivers rated as safe to drive by the backseat evaluator made significantly more excursive eye movements, exhibited more stable lane positioning and fewer sudden braking events, and drove at higher speeds than those rated as unsafe; there was no difference between safe and unsafe drivers in head movements. Conclusions: Persons with hemianopic or quadrantanopic field defects rated as safe to drive have different driving characteristics from those rated as unsafe when assessed using objective measures of driving performance.
Abstract:
Purpose: To investigate whether wearing different presbyopic vision corrections alters the pattern of eye and head movements when viewing dynamic driving-related traffic scenes. Methods: Participants included 20 presbyopes (mean age: 56±5.7 years) who had no experience of wearing presbyopic vision corrections (i.e. all were single vision wearers). Eye and head movements were recorded while wearing five different vision corrections in random order: single vision lenses (SV), progressive addition spectacle lenses (PALs), bifocal spectacle lenses (BIF), monovision (MV) and multifocal contact lenses (MTF CL). Videotape recordings of traffic scenes of suburban roads and expressways (with edited targets) were presented as dynamic driving-related stimuli, and digital numeric display panels were included as near visual stimuli (simulating speedometer and radio). Eye and head movements were recorded using the faceLAB™ system and the accuracy of target identification was also recorded. Results: The magnitude of eye movements while viewing the driving-related traffic scenes was greater when wearing BIF and PALs than MV and MTF CL (p≤0.013). The magnitude of head movements was greater when wearing SV, BIF and PALs than MV and MTF CL (p<0.0001), and the number of saccades was significantly higher for BIF and PALs than MV (p≤0.043). Target recognition accuracy was poorer for all vision corrections when the near stimulus was located at eccentricities inferiorly and to the left, rather than directly below the primary position of gaze (p=0.008), and PALs gave better performance than MTF CL (p=0.043). Conclusions: Different presbyopic vision corrections alter eye and head movement patterns. In particular, the larger magnitude of eye and head movements and greater number of saccades associated with the spectacle presbyopic corrections may impact on driving performance.
Abstract:
Fire design is an essential element of the overall design procedure for structural steel members and systems. Conventionally, the fire rating of load-bearing stud wall systems made of light gauge steel frames (LSF) is based on approximate prescriptive methods developed from limited fire tests, and this design approach is limited to standard wall configurations used by the industry. Increased fire rating is provided simply by adding more plasterboards to the stud walls. This is not an acceptable situation, as it not only inhibits innovation and structural and cost efficiencies but also casts doubt on the fire safety of these light gauge steel stud wall systems. Hence a detailed fire research study into the performance and effectiveness of a recently developed innovative composite panel wall system was undertaken at Queensland University of Technology using both full-scale fire tests and numerical studies. Experimental results for LSF walls using the new composite panels under axial compression load have shown improved fire performance and fire resistance rating. Numerical analyses are currently being undertaken using the finite element program ABAQUS. Measured temperature profiles of the studs are used in the numerical models, and the results are calibrated against full-scale test results. The validated model will be used in a detailed parametric study with the aim of developing suitable design rules within the current cold-formed steel structures and fire design standards. This paper will present the results of experimental and numerical investigations into the structural and fire behaviour of light gauge steel stud walls protected by the new composite panel. It will demonstrate the improvements provided by the new composite panel system in comparison to traditional wall systems.
Abstract:
This paper has two main sections, the first of which presents a summarized review of the literature concerning previous studies on the implementation of ISO 9000 quality management systems (QMSs) in both global construction companies and Indonesian construction firms, and the perceived correlation between organisational culture and QMS practices in the construction sector. The first section contributes to the development of the second, which presents details of the research project being undertaken. Based on the fundamental questions that led to the development of the main research objectives, suitable research methods have been developed to meet these objectives. Primary data will be collected using a mixed-methods approach, i.e., questionnaire surveys and focus group discussions/interviews, in order to obtain opinions from respondents drawn from targeted ISO-certified construction firms. Most of the data obtained will be analyzed using statistical software; the findings will then be discussed in order to ultimately develop a culture-based QMS framework.
Abstract:
Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or present the human operator with likely matches from a database. A person tracker is used to speed up subject detection and super-resolution by tracking moving subjects and cropping a region of interest around the subject's face, which reduces the number and size of the image frames to be super-resolved, respectively. In this paper, experiments have been conducted to demonstrate how the optical flow super-resolution method used improves surveillance imagery both for visual inspection and for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical flow based method has also been benchmarked against the “hallucination” algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for viewing by human operators.
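As an illustration of the Eigenface matching stage mentioned above (a sketch only; the paper's actual pipeline, training data and parameters are not given in the abstract, and all names here are invented), a minimal PCA-based face matcher in Python:

```python
import numpy as np

def train_eigenfaces(faces: np.ndarray, n_components: int = 20):
    """faces: (n_samples, h*w) matrix of flattened, aligned face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]           # (n_components, h*w)
    projections = centered @ eigenfaces.T    # gallery coordinates in face space
    return mean, eigenfaces, projections

def match(probe: np.ndarray, mean, eigenfaces, projections) -> int:
    """Return the index of the closest gallery face in eigenface space."""
    coords = (probe - mean) @ eigenfaces.T
    distances = np.linalg.norm(projections - coords, axis=1)
    return int(np.argmin(distances))
```

A super-resolved probe frame would simply be flattened and passed to `match`, which is why recognition rates improve when the probe images contain more recoverable facial detail.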
Abstract:
Road curves are an important feature of road infrastructure and many serious crashes occur on them. In Queensland, the number of fatalities on curves is twice that on straight roads. There is therefore a need to reduce drivers’ exposure to crash risk on road curves. Road crash numbers in Australia and across the Organisation for Economic Co-operation and Development (OECD) plateaued in the five years from 2004 to 2008, and the road safety community is actively seeking innovative interventions to reduce the number of crashes. However, designing an innovative and effective intervention may prove difficult, as it relies on providing theoretical foundation, coherence, understanding and structure to both the design and the validation of the efficiency of the new intervention. Researchers from multiple disciplines have developed various models to determine the contributing factors for crashes on road curves with a view to reducing the crash rate. However, most existing methods are based on statistical analysis of contributing factors described in government crash reports. To further explore the contributing factors related to crashes on road curves, this thesis designs a novel method to analyse and validate them: crash claim reports from an insurance company are analysed using data mining techniques. To the best of our knowledge, this is the first attempt to use data mining techniques to analyse crashes on road curves. Because the reports consist of thousands of textual descriptions, text mining is employed to identify the contributing factors. Beyond identifying contributing factors, few studies to date have investigated the relationships between these factors, especially for crashes on road curves. This study therefore proposes the use of rough set analysis to determine these relationships, and the results of this analysis are used to assess the effect of the contributing factors on crash severity. The findings obtained through the data mining techniques presented in this thesis are consistent with previously identified contributing factors. Furthermore, this thesis identifies new contributing factors and the relationships between them. One significant pattern related to crash severity is the time of day: severe road crashes occur more frequently in the evening or at night. Tree collision is another common pattern: crashes that occur in the morning and involve hitting a tree are likely to have a higher crash severity. Another factor that influences crash severity is the age of the driver; most age groups face a high crash severity except drivers between 60 and 100 years old, who have the lowest. The significant relationship identified between contributing factors involves the time of the crash, the year of manufacture of the vehicle, the age of the driver and hitting a tree. Having identified new contributing factors and relationships, a validation process was carried out using a traffic simulator to determine their accuracy, and it indicates that the results are accurate. This demonstrates that data mining techniques are a powerful tool in road safety research, and can be usefully applied within the Intelligent Transport System (ITS) domain. The research presented in this thesis provides an insight into the complexity of crashes on road curves.
The findings of this research have important implications for both practitioners and academics. For road safety practitioners, the results from this research illustrate practical benefits for the design of interventions for road curves that will potentially help in decreasing related injuries and fatalities. For academics, this research opens up a new research methodology to assess crash severity, related to road crashes on curves.
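The text-mining step described above, extracting candidate contributing factors from free-text claim descriptions, could for instance be approximated with TF-IDF keyword ranking. This is a sketch only, with invented example claims, since the thesis's actual method and corpus are not detailed in the abstract:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text crash claim descriptions (invented examples).
claims = [
    "vehicle left the road on a curve at night and hit a tree",
    "driver lost control cornering in wet conditions in the evening",
    "car drifted across the centre line on a bend and collided head-on",
]

# Rank terms by TF-IDF weight to surface candidate contributing factors.
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(claims)
terms = vectorizer.get_feature_names_out()

for i, doc in enumerate(tfidf.toarray()):
    top = doc.argsort()[::-1][:3]
    print(f"claim {i}: {[terms[j] for j in top]}")
```

Terms that rank highly across many claims (e.g. "tree", "night", "curve") would then become candidate factors for the subsequent rough set analysis of relationships and severity.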
Abstract:
We propose to design a Custom Learning System that responds to the unique needs and potentials of individual students, regardless of their location, abilities, attitudes, and circumstances. This project is intentionally provocative and future-looking but it is not unrealistic or unfeasible. We propose that by combining complex learning databases with a learner’s personal data, we could provide all students with a personal, customizable, and flexible education. This paper presents the initial research undertaken for this project of which the main challenges were to broadly map the complex web of data available, to identify what logic models are required to make the data meaningful for learning, and to translate this knowledge into simple and easy-to-use interfaces. The ultimate outcome of this research will be a series of candidate user interfaces and a broad system logic model for a new smart system for personalized learning. This project is student-centered, not techno-centric, aiming to deliver innovative solutions for learners and schools. It is deliberately future-looking, allowing us to ask questions that take us beyond the limitations of today to motivate new demands on technology.
Abstract:
Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well suited to basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive, mass-market mobile robots, we propose to characterise key places of the environment by their visual appearance, through colour histograms. This appearance-based representation of places rests on the fact that colour histograms change slowly as the field of vision sweeps the scene while a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate in experiments on an indoor data set that a topological map, in which places are characterised by visual appearance augmented with metric clues, provides sufficient information to perform continuous metric localisation that is robust to the kidnapped-robot problem. Many topological mapping methods build a topological map by clustering visual observations into places. However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping: we propose to incorporate neighbourhood relations to disambiguate places which are otherwise indistinguishable. We present a constraint-based stochastic local search method which integrates this place disambiguation approach in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method for reliably localising the robot while building a topological map.
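The appearance-based place characterisation described above compares colour histograms of incoming images against stored place signatures. A minimal sketch of that idea in Python (histogram intersection is one common similarity choice; the thesis's actual distance measure, binning and data structures are not specified in the abstract):

```python
import numpy as np

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised joint RGB histogram of an (h, w, 3) uint8 image."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    return hist.ravel() / hist.sum()

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity in [0, 1]; 1 means identical histograms."""
    return float(np.minimum(h1, h2).sum())

def localise(image: np.ndarray, place_signatures: dict) -> str:
    """Return the place whose stored histogram best matches the image."""
    h = colour_histogram(image)
    return max(
        place_signatures,
        key=lambda p: histogram_intersection(h, place_signatures[p]),
    )
```

Because histograms vary slowly across a region, each stored signature stands for an extended place rather than a single pose, which is what makes this representation compact.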
Abstract:
This study is the first to investigate the effect of prolonged reading on reading performance and visual functions in students with low vision. The study focuses on one of the most common modes of achieving adequate magnification for reading by students with low vision: their close reading distance (proximal or relative distance magnification). Close reading distances impose high demands on near visual functions, such as accommodation and convergence. Previous research on accommodation in children with low vision shows that their accommodative responses are reduced compared to normal vision. In addition, there is an increased lag of accommodation for higher stimulus levels, as may occur at close reading distances. Reduced accommodative responses in low vision and higher lag of accommodation at close reading distances together could impact the reading performance of students with low vision, especially during prolonged reading tasks. The presence of convergence anomalies could further affect reading performance. Therefore, the aims of the present study were: 1) to investigate the effect of prolonged reading on reading performance in students with low vision; and 2) to investigate the effect of prolonged reading on visual functions in students with low vision. This study was conducted as cross-sectional research on 42 students with low vision and a comparison group of 20 students with normal vision, aged 7 to 20 years. The students with low vision had vision impairments arising from a range of causes and represented a typical group of students with low vision, with no significant developmental delays, attending school in Brisbane, Australia. All participants underwent a battery of clinical tests before and after a prolonged reading task. An initial reading-specific history and pre-task measurements that included Bailey-Lovie distance and near visual acuities, Pelli-Robson contrast sensitivity, ocular deviations, sensory fusion, ocular motility, near point of accommodation (pull-away method), accuracy of accommodation (Monocular Estimation Method (MEM) retinoscopy) and Near Point of Convergence (NPC; push-up method) were recorded for all participants. Reading performance measures were Maximum Oral Reading Rates (MORR), Near Text Visual Acuity (NTVA) and acuity reserves using Bailey-Lovie text charts. Symptoms of visual fatigue were assessed using the Convergence Insufficiency Symptom Survey (CISS) for all participants. Pre-task measurements of reading performance, accuracy of accommodation and NPC were compared with post-task measurements to test for any effects of prolonged reading. The prolonged reading task involved reading a storybook silently for at least 30 minutes, and was controlled for print size, contrast, difficulty level and content of the reading material. Silent Reading Rate (SRR) was recorded every 2 minutes during prolonged reading. Symptom scores and visual fatigue scores were also obtained for all participants. A visual fatigue analogue scale (VAS) was used to assess visual fatigue during the task: once at the beginning, once in the middle and once at the end. In addition to these subjective assessments, tonic accommodation was monitored every 6 minutes during the task using a photorefractor (PlusoptiX CR03™) as an objective assessment of visual fatigue. Reading measures were taken at the habitual reading distance for students with low vision and at 25 cm for students with normal vision.
The initial history showed that the students with low vision read for significantly shorter periods at home compared to the students with normal vision. The working distances of participants with low vision ranged from 3 to 25 cm and half of them were not using any optical devices for magnification. Nearly half of the participants with low vision were able to resolve 8-point print (1M) at 25 cm. Half of the participants in the low vision group had ocular deviations and suppression at near. Reading rates were significantly reduced in students with low vision compared to those of students with normal vision. In addition, significantly more participants in the low vision group could not sustain the 30-minute task compared to the normal vision group. However, there were no significant changes in reading rates during or following prolonged reading in either the low vision or normal vision groups. Individual changes in reading rates were independent of baseline reading rates, indicating that changes in reading rates during prolonged reading cannot be predicted from a typical clinical assessment of reading using brief reading tasks. Contrary to previous reports, the silent reading rates of the students with low vision were significantly lower than their oral reading rates, although oral and silent reading were assessed using different methods. Although visual acuity, contrast sensitivity, near point of convergence and accuracy of accommodation were significantly poorer for the low vision group compared to the normal vision group, there were no significant changes in any of these visual functions following prolonged reading in either group. Interestingly, a few students with low vision (n = 10) were found to be reading at a distance closer than their near point of accommodation, suggesting a decreased sensitivity to blur. Further evaluation revealed that the equivalent intrinsic refractive errors (an estimate of the spherical dioptric defocus which would be expected to yield a patient’s visual acuity in normal subjects) were significantly larger for the low vision group compared to the normal vision group. As expected, accommodative responses were significantly reduced for the low vision group compared to the expected norms, which is consistent with their close reading distances, reduced visual acuity and contrast sensitivity. For those in the low vision group whose accommodative error exceeded their equivalent intrinsic refractive errors, a significant decrease in MORR was found following prolonged reading. Silent reading rates, however, were not significantly affected by accommodative errors in the present study. Suppression also had a significant impact on the changes in reading rates during prolonged reading: participants who did not have suppression at near showed significant decreases in silent reading rates during and following prolonged reading. This impact of binocular vision at near on prolonged reading was possibly due to the high demands on convergence. The significant predictors of MORR in the low vision group were age, NTVA, reading interest and reading comprehension, accounting for 61.7% of the variance in MORR. SRR was not significantly influenced by any factor except the duration of the reading task sustained; participants with higher reading rates were able to sustain a longer reading duration. In students with normal vision, age was the only predictor of MORR.
Participants with low vision also reported significantly greater visual fatigue compared to the normal vision group. Measures of tonic accommodation however were little influenced by visual fatigue in the present study. Visual fatigue analogue scores were found to be significantly associated with reading rates in students with low vision and normal vision. However, the patterns of association between visual fatigue and reading rates were different for SRR and MORR. The participants with low vision with higher symptom scores had lower SRRs and participants with higher visual fatigue had lower MORRs. As hypothesized, visual functions such as accuracy of accommodation and convergence did have an impact on prolonged reading in students with low vision, for students whose accommodative errors were greater than their equivalent intrinsic refractive errors, and for those who did not suppress one eye. Those students with low vision who have accommodative errors higher than their equivalent intrinsic refractive errors might significantly benefit from reading glasses. Similarly, considering prisms or occlusion for those without suppression might reduce the convergence demands in these students while using their close reading distances. The impact of these prescriptions on reading rates, reading interest and visual fatigue is an area of promising future research. Most importantly, it is evident from the present study that a combination of factors such as accommodative errors, near point of convergence and suppression should be considered when prescribing reading devices for students with low vision. Considering these factors would also assist rehabilitation specialists in identifying those students who are likely to experience difficulty in prolonged reading, which is otherwise not reflected during typical clinical reading assessments.
Abstract:
This paper proposes a model of complex system design for sustainable architecture within a framework of entropy evolution. The spectrum of sustainable architecture consists of the efficient use of energy and material resources across the life-cycle of buildings, the active involvement of occupants in micro-climate control within buildings, and the natural environmental context. The interactions of these parameters compose a complex system of sustainable architectural design, for which conventional linear and fragmented design technologies are insufficient to indicate holistic and ongoing environmental performance. The complexity theory of dissipative structures provides a microscopic formulation of open-system evolution, which in turn offers a system design framework for evolving building environmental performance towards an optimisation of sustainability in architecture.
Abstract:
An Approach with Vertical Guidance (APV) is an instrument approach procedure which provides horizontal and vertical guidance to a pilot on approach to landing in reduced visibility conditions. APV approaches can greatly reduce the safety risk to general aviation by improving the pilot’s situational awareness. In particular, the incidence of Controlled Flight Into Terrain (CFIT), which has featured in a number of fatal general aviation crashes in Australia over the past decade, can be reduced. APV approaches can also improve general aviation operations. If implemented at Australian airports, APV approach procedures are expected to bring a cost saving of millions of dollars to the economy due to fewer missed approaches and diversions, and an increased safety benefit. The provision of accurate horizontal and vertical guidance is achievable using the Global Positioning System (GPS). Because aviation is a safety-of-life application, an aviation-certified GPS receiver must have integrity monitoring or augmentation to ensure that its navigation solution can be trusted. However, the difficulty of meeting APV integrity requirements with the current GPS satellite constellation alone, the susceptibility of GPS to jamming or interference, and the potential shortcomings of proposed augmentation solutions for Australia such as the Ground-based Regional Augmentation System (GRAS) justify the investigation of Aircraft Based Augmentation Systems (ABAS) as an alternative integrity solution for general aviation. ABAS augments GPS with other sensors at the aircraft to help it meet the integrity requirements. Typical ABAS designs assume high-quality inertial sensors to provide an accurate reference trajectory for Kalman filters; unfortunately, such sensors are too expensive for general aviation. In contrast, the purpose of this research is to investigate fusing GPS with lower-cost Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMU) and a mathematical model of aircraft dynamics, referred to in this thesis as an Aircraft Dynamic Model (ADM). Using a model of aircraft dynamics in navigation systems has been studied before in the literature and shown to be useful, particularly for aiding inertial coasting or attitude determination. In contrast to these applications, this thesis investigates its use in ABAS. This thesis presents an ABAS architecture concept which makes use of a MEMS IMU and ADM, named the General Aviation GPS Integrity System (GAGIS) for convenience. GAGIS includes a GPS, MEMS IMU, ADM and a bank of Extended Kalman Filters (EKF), and uses the Normalized Solution Separation (NSS) method for fault detection. The GPS, IMU and ADM information is fused in a tightly-coupled configuration, with frequent GPS updates applied to correct the IMU and ADM. The use of both IMU and ADM allows a number of different configurations, three of which are investigated in this thesis: a GPS-IMU EKF, a GPS-ADM EKF and a GPS-IMU-ADM EKF. The integrity monitoring performance of these three architectures is compared against each other and against a stand-alone GPS architecture in a series of computer simulation tests of an APV approach, with typical GPS, IMU, ADM and environmental errors simulated. The simulation results show the GPS integrity monitoring performance achievable by augmenting GPS with an ADM and low-cost IMU for a general aviation aircraft on an APV approach.
A contribution to research is made in determining whether a low-cost IMU or ADM can provide improved integrity monitoring performance over stand-alone GPS. It is found that a reduction of approximately 50% in protection levels is possible using the GPS-IMU EKF or GPS-ADM EKF as well as faster detection of a slowly growing ramp fault on a GPS pseudorange measurement. A second contribution is made in determining how augmenting GPS with an ADM compares to using a low-cost IMU. By comparing the results for the GPS-ADM EKF against the GPS-IMU EKF it is found that protection levels for the GPS-ADM EKF were only approximately 2% higher. This indicates that the GPS-ADM EKF may potentially replace the GPS-IMU EKF for integrity monitoring should the IMU ever fail. In this way the ADM may contribute to the navigation system robustness and redundancy. To investigate this further, a third contribution is made in determining whether or not the ADM can function as an IMU replacement to improve navigation system redundancy by investigating the case of three IMU accelerometers failing. It is found that the failed IMU measurements may be supplemented by the ADM and adequate integrity monitoring performance achieved. Besides treating the IMU and ADM separately as in the GPS-IMU EKF and GPS-ADM EKF, a fourth contribution is made in investigating the possibility of fusing the IMU and ADM information together to achieve greater performance than either alone. This is investigated using the GPS-IMU-ADM EKF. It is found that the GPS-IMU-ADM EKF can achieve protection levels approximately 3% lower in the horizontal and 6% lower in the vertical than a GPS-IMU EKF. However this small improvement may not justify the complexity of fusing the IMU with an ADM in practical systems. Affordable ABAS in general aviation may enhance existing GPS-only fault detection solutions or help overcome any outages in augmentation systems such as the Ground-based Regional Augmentation System (GRAS). Countries such as Australia which currently do not have an augmentation solution for general aviation could especially benefit from the economic savings and safety benefits of satellite navigation-based APV approaches.
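As a toy illustration of the tightly-coupled filtering idea above (not the thesis's GAGIS implementation; the one-dimensional state, dynamics and noise values below are invented for exposition), a minimal Extended Kalman Filter predict/update step in Python, where the prediction stands in for IMU/ADM propagation and the nonlinear measurement stands in for a GPS pseudorange:

```python
import numpy as np

# Toy 1-D state: [position, velocity]. Invented dynamics and noise values.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
Q = np.diag([0.01, 0.05])               # process noise covariance
R = np.array([[4.0]])                   # range measurement noise covariance
sat_pos = 20200e3                       # hypothetical satellite position (m)

def h(x):
    """Nonlinear measurement: range from the state position to the satellite."""
    return np.array([abs(sat_pos - x[0])])

def H_jac(x):
    """Jacobian of h evaluated at x (1x2)."""
    return np.array([[-np.sign(sat_pos - x[0]), 0.0]])

def ekf_step(x, P, z):
    # Predict with the dynamic model (the IMU or ADM would drive this step).
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the GPS-style range measurement.
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

In a solution-separation scheme such as NSS, a bank of these filters is run, each excluding one measurement source, and the separations between their state estimates are tested against thresholds to detect faults.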
Abstract:
The current epidemic of paediatric obesity is accompanied by a myriad of health-related comorbid conditions. Despite the higher prevalence of orthopaedic conditions in overweight children, little published research has considered the influence of these conditions on the ability to undertake physical activity. As physical activity participation is directly related to improvements in physical fitness, skeletal health and metabolic conditions, higher levels of physical activity are encouraged, and exercise is commonly prescribed in the treatment and management of childhood obesity. However, research has not correlated orthopaedic conditions, including the increased joint pain and discomfort commonly reported by overweight children, with decreases in physical activity. Research has confirmed that overweight children typically display a slower, more tentative walking pattern with increased forces at the hip, knee and ankle during 'normal' gait. This research, combined with anthropometric data indicating a higher prevalence of musculoskeletal malalignment in overweight children, suggests that such individuals are poorly equipped to undertake certain forms of physical activity. Concomitant increases in obesity and decreases in physical activity levels strongly support the need to better understand the musculoskeletal factors associated with the performance of motor tasks by overweight and obese children.
Abstract:
In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Starting from a set of very abstract models in the early stage of the development, they are refined into more concrete models and finally, as a last step, into code. As early phases of development focus on different concepts compared to later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations providing support for reusing and automating recurring translation efforts. These transformations typically can only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In this case the models get out of sync and therefore do not constitute a coherent description of the software system anymore, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost-saving, and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, for every change to the target model there must be exactly one change to the source model. While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there might be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure Round-Trip Engineering of non-injective transformations can be supported.
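As a schematic illustration of the search procedure described above, which explores candidate source-model changes in best-first order under a heuristic until enough "good" explanations of the target change are found. Everything here, including the callback names and the absence of the abductive machinery itself, is an invented stand-in for the thesis's actual techniques:

```python
import heapq

def best_first_search(initial_changes, heuristic, is_valid, neighbours, limit=10):
    """Explore candidate source-model edits, lowest heuristic score first,
    collecting up to `limit` edits that validly explain the target change."""
    # Priority queue of (score, tiebreak, candidate); the tiebreak counter
    # avoids comparing candidate objects that have no natural ordering.
    frontier = [(heuristic(c), i, c) for i, c in enumerate(initial_changes)]
    heapq.heapify(frontier)
    counter = len(frontier)
    explanations = []
    while frontier and len(explanations) < limit:
        _, _, change = heapq.heappop(frontier)
        if is_valid(change):            # change re-establishes model consistency
            explanations.append(change)
        for nxt in neighbours(change):  # abduction would generate these candidates
            counter += 1
            heapq.heappush(frontier, (heuristic(nxt), counter, nxt))
    return explanations
```

Under a non-injective transformation, several popped candidates may pass `is_valid` for the same target change; the heuristic (model metrics in the thesis) is what ranks those competing source changes so the "good" ones surface first.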