899 results for: uncertain nonholonomic dynamic system
Abstract:
This article considers a set of problems connected with the analysis, estimation, and structural-parametric optimization of dynamic systems. The connection between such problems and the control of beams of trajectories is emphasized. Particular attention is given to a review and analysis of the research carried out to date, stressing its constructiveness and applied orientation. The efficiency of the developed algorithms and software is demonstrated on problems of modeling and optimizing output beam characteristics in linear resonance accelerators.
Abstract:
The limited literature on parameter estimation for dynamic systems has been identified as the central reason why parametric bounds have not been available for chaotic time series. The literature establishes that a chaotic system displays sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. A parameter estimation technique could therefore make it possible to establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series. This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds it is possible to improve the predictability of a time series even when the dynamics, or the mathematical model, of that series is not known a priori. In our attempt to improve the predictability of various time series, we established bounds for a set of unknown parameters: (i) the embedding dimension used to unfold a set of observations in phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use while avoiding false neighbors, and (iv) the degree of the local polynomial used to build numerical interpolation functions from one region to another. Using these bounds, we obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework for investigating predictability in time series from the system-dynamics point of view. In closing, our procedure significantly reduces computer resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a nonlinear time series by observing its values.
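To make the roles of these four parameters concrete, the following minimal sketch (illustrative only, not the dissertation's code) performs delay-coordinate embedding and a simple distance-weighted nearest-neighbour prediction; m, tau and k correspond to the embedding dimension, time delay and neighbourhood size bounded above, and the weighted average plays the role of a degree-one local polynomial:

```python
# Minimal sketch: delay embedding plus local nearest-neighbour prediction.
# All names and parameter values are illustrative.
import numpy as np

def delay_embed(x, m, tau):
    """Embed a scalar series x into m-dimensional delay vectors."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i : i + m * tau : tau] for i in range(n)])

def predict_next(x, m=3, tau=1, k=5):
    """Predict x[t+1] from the k nearest neighbours of the last delay vector."""
    vecs = delay_embed(x, m, tau)
    query, history = vecs[-1], vecs[:-1]
    # one-step-ahead images of each historical delay vector
    targets = x[(m - 1) * tau + 1 : (m - 1) * tau + 1 + len(history)]
    dists = np.linalg.norm(history - query, axis=1)
    idx = np.argsort(dists)[:k]
    # degree-one local model: distance-weighted average of neighbour images
    w = 1.0 / (dists[idx] + 1e-12)
    return np.sum(w * targets[idx]) / np.sum(w)

# toy usage on the logistic map, a standard chaotic test series
x = np.empty(500); x[0] = 0.4
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
print(predict_next(x, m=3, tau=1, k=5), 3.9 * x[-1] * (1 - x[-1]))
```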
Abstract:
This is an investigation into the development of a numerical assessment method for the hydrodynamic performance of an oscillating water column (OWC) wave energy converter. In this research, a systematic study has been carried out on how the hydrodynamic problem can be solved and represented reliably, focusing on the interactions of the wave with the structure and with the internal water surface. These phenomena are examined extensively and numerically to show how the hydrodynamic parameters can be reliably obtained and used for OWC performance assessment. In studying the dynamic system, a two-body model is used for the OWC wave energy converter. The first body is the device itself; the second body is an imaginary "piston," which replaces part of the water at the internal water surface in the water column. One advantage of the two-body model of an OWC wave energy converter is its clear physical representation, so the relevant mathematical expressions and numerical simulation can be straightforward: the main hydrodynamic parameters can be assessed using the boundary element method for potential flow in the frequency domain, and the relevant parameters are then transformed directly from the frequency domain to the time domain for the two-body system. However, as shown in this research, an appropriate representation of the imaginary piston is very important, especially when the relevant parameters have to be transformed from the frequency domain to the time domain for further analysis. The examples given in the research show that correctly transforming the parameters from the frequency domain to the time domain can be a vital factor in a successful numerical simulation.
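As an illustration of the frequency-to-time-domain step described above, the following sketch computes the radiation-impulse (retardation) kernel from sampled radiation damping via the standard Ogilvie relation; the damping curve B(w) is made up for the example and is not data from this work:

```python
# Sketch of the standard frequency-to-time-domain step in two-body
# simulations: K(t) = (2/pi) * integral_0^inf B(w) cos(w t) dw,
# evaluated by trapezoidal quadrature. B(w) below is illustrative.
import numpy as np

w = np.linspace(0.01, 6.0, 600)      # frequencies [rad/s]
B = 1e4 * w**2 * np.exp(-w)          # made-up radiation damping [N s/m]

def retardation_kernel(t, w, B):
    """K(t) from sampled damping B(w) via trapezoidal integration."""
    f = B * np.cos(w * t)
    return (2.0 / np.pi) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))

t = np.linspace(0.0, 20.0, 201)
K = np.array([retardation_kernel(ti, w, B) for ti in t])
# K(t) then enters the Cummins-type radiation convolution term:
#   F_rad(t) = -A_inf * a(t) - integral_0^t K(t - s) v(s) ds
print(K[:3])
```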
Abstract:
A lightweight Java application suite has been developed and deployed allowing collaborative learning between students and tutors at remote locations. Students can engage in group activities online and also collaborate with tutors. A generic Java framework has been developed and applied to electronics, computing and mathematics education. The applications are, respectively: (a) a digital circuit simulator, which allows students to collaborate in building simple or complex electronic circuits; (b) a Java programming environment whose paradigm is behaviour-based robotics; and (c) a differential equation solver useful in modelling any complex, nonlinear dynamic system. Each student sees a common shared window to which text or graphical objects may be added and which is then shared online. A built-in chat room supports collaborative dialogue. Students can work either in collaborative groups or in teams as directed by the tutor. This paper summarises the technical architecture of the system as well as the pedagogical implications of the suite. A report of student evaluation, distilled from use over a period of twelve months, is also presented. We intend this suite to facilitate learning between groups at one or many institutions and to facilitate international collaboration. We also intend to use the suite as a tool to research the establishment and behaviour of collaborative learning groups. We shall make our software freely available to interested researchers.
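For context, application (c) would typically wrap a standard integrator; the sketch below shows a classic fourth-order Runge-Kutta stepper applied to the van der Pol oscillator as an example nonlinear dynamic system. The suite itself is Java; this Python version is purely illustrative:

```python
# Classic RK4 stepper of the kind a differential-equation solver wraps.
import numpy as np

def rk4_step(f, t, y, h):
    """Advance dy/dt = f(t, y) by one step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def van_der_pol(t, y, mu=2.0):
    """Example nonlinear system: van der Pol oscillator."""
    return np.array([y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]])

y, t, h = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(1000):
    y = rk4_step(van_der_pol, t, y, h)
    t += h
print(t, y)
```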
Abstract:
Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. over the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in recent years. The overall goal of this dissertation was to use the current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at the pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, and a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigerating leafy greens increases their shelf life and mitigates bacterial growth, but storing foods at lower temperatures also increases the storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens in the supply chain, minimizing the storage cost while maintaining the desired levels of sensory quality and microbial safety. Most of the U.S. outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 have occurred during July-November. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall) simulating a farm in a major leafy-greens-producing area of California was developed. The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model are in agreement with the seasonality of outbreaks. This dissertation thus brought together growth, survival, and death models of enteric pathogens in leafy greens across production and the supply chain.
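As a hedged illustration of how such a growth model can be driven by a dynamic temperature profile, the sketch below couples a Ratkowsky-type square-root secondary model to a simple log-linear primary model; the parameter values and the 48-hour profile are invented for the example and are not the dissertation's fitted values:

```python
# Minimal sketch of dynamic growth prediction over a supply-chain
# time-temperature profile. All numbers are illustrative.
import numpy as np

b, T_min = 0.023, 1.5           # hypothetical Ratkowsky parameters [per sqrt(h), °C]

def mu_max(T):
    """Maximum specific growth rate (log10 CFU/g per h) at temperature T."""
    return (b * max(T - T_min, 0.0)) ** 2

# made-up 48 h profile: refrigerated, with a 6 h temperature-abuse spike
hours = np.arange(0, 48, 1.0)
temps = np.where((hours > 24) & (hours < 30), 15.0, 4.0)

logN = 2.0                      # initial contamination, log10 CFU/g
for T in temps:                 # 1 h Euler steps along the profile
    logN += mu_max(T) * 1.0
print(f"predicted level after 48 h: {logN:.2f} log10 CFU/g")
```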
Abstract:
Since the end of the Cold War, recurring civil conflicts have been the dominant form of violent armed conflict in the world, accounting for 70% of conflicts active between 2000 and 2013. The duration and intensity of episodes within recurring conflicts in Africa exhibit four behaviors characteristic of archetypal dynamic system structures. The overarching questions asked in this study are whether these patterns are robustly correlated with fundamental concepts of resiliency in dynamic systems that scale from micro to macro levels; whether they are consistent with theoretical risk factors and causal mechanisms; and what the policy implications are. Econometric analysis and dynamic systems modeling of 36 conflicts in Africa between 1989 and 2014 are combined with process tracing in a case study of Somalia to evaluate correlations between state characteristics, peace operations, and foreign aid and the likelihood of observed conflict patterns, to test hypothesized causal mechanisms across scales, and to develop policy recommendations for increasing human security while decreasing the resiliency of belligerents. The findings are that observed conflict patterns scale from micro to macro levels; are strongly correlated with state characteristics that proxy a mix of cooperative (e.g., gender equality) and coercive (e.g., security forces) conflict-balancing mechanisms; and are weakly correlated with UN and regional peace operations and humanitarian aid. Interactions between peace operations and aid interventions that affect conflict persistence at micro levels are not seen in macro-level analysis, owing to interdependent micro-level feedback mechanisms, sequencing, and lagged effects. This study finds that the dynamic system structures associated with observed conflict patterns contain tipping points between balancing mechanisms at the interface of micro-macro interactions, determined as much by how intervention policies are designed and implemented as by what they are. The policy implications are that reducing the risk of conflict persistence requires that peace operations and aid interventions (1) simultaneously increase transparency, promote inclusivity (with emphasis on gender equality), and empower local civilian involvement in accountability measures at the local level; (2) build bridges to integrate horizontally and vertically across levels; and (3) pave pathways towards conflict transformation mechanisms and justice that scale from the individual to the community, regional, and national levels.
Abstract:
Scottish sandstone buildings are now suffering the long-term effects of salt-crystallisation damage, owing in part to the repeated deposition of de-icing salts during winter months. The use of de-icing salts is necessary in order to maintain safe road and pavement conditions during cold weather, but their use comes at a price. Sodium chloride (NaCl), used as the primary de-icing salt throughout the country, is known to be damaging to sandstone masonry. However, a range of alternative, commercially available de-icing salts exists, and it is unknown what effect these salts have on porous building materials such as sandstone. In order to protect our built heritage against salt-induced decay, it is vital to understand the effects of these different salts on the range of sandstone types seen within the historic buildings of Scotland. Eleven common types of sandstone were characterised using a suite of methods in order to understand their mineralogy, pore structure and response to moisture movement, the vital properties that govern a stone's response to weathering and decay. The sandstones were then put through a range of durability tests designed to measure their resistance to various weathering processes. Three salt crystallisation tests were undertaken on the sandstones, over a range of 16 to 50 cycles, testing their durability to NaCl, CaCl2, MgCl2 and a chloride blend salt. Samples were analysed primarily by measuring their dry weight loss after each cycle, visually after each cycle, and by other complementary methods in order to understand their changing response to moisture uptake after salt treatment. Salt crystallisation was identified as the primary mechanism of decay for each salt, with the extent of damage in each sandstone influenced by environmental conditions and by the pore-grain properties of the stone. Damage recorded in the salt crystallisation tests was ultimately caused by the generation of high crystallisation pressures within the confined pore networks of each stone. Stone- and test-specific parameters controlled the location and magnitude of damage, with the proportion of micro-pores, their spatial distribution, the water absorption coefficient and the drying efficiency of each stone identified as the most important stone-specific properties influencing salt-induced decay. Strong correlations were found between the dry weight loss of NaCl-treated samples and the proportion of pores <1 µm in diameter. Crystallisation pressures are known to scale inversely with pore size, while the spatial distribution of these micro-pores is thought to influence the rate, overall extent and type of decay within the stone by concentrating crystallisation pressures in specific regions. The water absorption determines the total amount of moisture entering the stone, which represents the total void space available for salt crystallisation. The drying parameters, on the other hand, ultimately control the distribution of salt crystallisation. Stones characterised by a combination of a high proportion of micro-pores, high water absorption values and slow drying kinetics were shown to be the most vulnerable to NaCl-induced decay. CaCl2 and MgCl2 are shown to have similar crystallisation behaviour, forming thin crystalline sheets under low relative humidity and/or high-temperature conditions, although distinct differences in their behaviour, influenced by test-specific criteria, were also identified.
The location of MgCl2 crystallisation close to the stone surface, influenced by prolonged drying under moderate-temperature conditions, was identified as the main factor causing substantial dry weight loss in specific stone types. CaCl2 solutions remained unaffected under these conditions and crystallised only at high temperatures. Homogeneous crystallisation of CaCl2 throughout the stone produced greater internal change, with little dry weight loss recorded. NaCl formed distinctive isometric hopper crystals that caused damage through the non-equilibrium growth of salts in trapped regions of the stone. Damage manifested as granular decay and contour scaling across most stone types. The pore network and hydric properties of the stones continually evolve in response to salt crystallisation, creating a dynamic system in which the initial, known properties of clean quarried stone will not continue to govern the processes of salt crystallisation, nor can they continually predict the stone's susceptibility to salt-induced decay.
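The inverse scaling of crystallisation pressure with pore size noted above is conventionally expressed through two standard relations from the salt-weathering literature, stated here for reference (the thesis's own formulation may differ):

```latex
% Everett relation: excess pressure on a crystal growing from a large
% pore (radius R_p) into small pore throats (radius r_p), where
% \gamma_{cl} is the crystal-liquid interfacial energy:
\Delta P = 2\gamma_{cl}\left(\frac{1}{r_p} - \frac{1}{R_p}\right)
% Correns relation: pressure sustained by a crystal in contact with a
% solution supersaturated by the ratio C/C_s (V_m = molar volume):
P_c = \frac{R\,T}{V_m}\,\ln\!\left(\frac{C}{C_s}\right)
```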
Abstract:
This thesis reports on an investigation of the feasibility and usefulness of incorporating dynamic management facilities for managing sensed context data in a distributed context-aware mobile application. The investigation focuses on reducing the work required to integrate new sensed context streams into an existing context-aware architecture. Current architectures require integration work for each new stream and each new context encountered. This mode of operation is acceptable for current fixed architectures. However, as systems become more mobile, the number of discoverable streams increases. Without the ability to discover and use these new streams, the functionality of any given device will be limited to the streams it knows how to decode. The integration of a new stream requires that its sensed context data be understood by the current application. If the new source provides data of a type that an application currently requires, then the new source should be connected to the application without any prior knowledge of the new source. If the type is similar and can be converted, then this stream too should be appropriated by the application. Such applications are based on portable devices (phones, PDAs) providing semi-autonomous services that use data from sensors connected to the devices, plus data exchanged with other such devices and remote servers. They must handle input from a variety of sensors, refining the data locally and managing its communication from the device under volatile and unpredictable network conditions. The choice to focus on locally connected sensory input allows for the introduction of privacy and access controls; this local control can determine how the information is communicated to others. This investigation evaluates three approaches to sensor data management. The first system is characterised by static management based on prepended metadata; this was the reference system. Developed for a mobile system, it processed data according to the attached metadata, and the code that performed the processing was static. The second system was developed to move away from static processing and introduce greater freedom in handling the data stream, resulting in a heavyweight approach. This approach pushed the processing of the data into a number of networked nodes rather than the monolithic design of the previous system. By creating a separate communication channel for the metadata, it is possible to be more flexible in the amount and type of data transmitted. The final system pulled the benefits of the other systems together: by providing a small management class that loads a separate handler based on the incoming data, dynamism was maximised while keeping the code easy to understand. The three systems were then compared to highlight their ability to dynamically manage new sensed context. The evaluation took two approaches: the first is a quantitative analysis of the code to understand the complexity of the three systems, done by evaluating what changes each system needed to support a new context; the second takes a qualitative view of the work required by the software engineer to reconfigure the systems to support a new data stream. The evaluation highlights the scenarios in which each of the three systems is best suited. There is always a trade-off in the development of a system, and the three approaches highlight this fact.
A statically bound system can be quick to develop but may need to be completely rewritten if the requirements move too far. Alternatively, a highly dynamic system may be able to cope with new requirements, but the developer time needed to create such a system may be greater than that needed to create several simpler systems.
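The third system's central idea, a small management class that selects a handler from the incoming stream's declared type, can be sketched as follows; the thesis implementation is not in Python, and all names here are illustrative:

```python
# Sketch of a metadata-driven handler registry: a new stream type is
# supported by registering one new handler, without touching the manager.
from typing import Callable, Dict

class ContextStreamManager:
    """Maps a stream's metadata type tag to a dynamically registered handler."""
    def __init__(self):
        self._handlers: Dict[str, Callable[[bytes], object]] = {}

    def register(self, data_type: str, handler: Callable[[bytes], object]):
        self._handlers[data_type] = handler   # new stream = one new entry

    def dispatch(self, data_type: str, payload: bytes):
        handler = self._handlers.get(data_type)
        if handler is None:
            raise KeyError(f"no handler for stream type {data_type!r}")
        return handler(payload)

manager = ContextStreamManager()
manager.register("gps/nmea", lambda raw: raw.decode().split(","))
print(manager.dispatch("gps/nmea", b"$GPGGA,123519,4807.038,N"))
```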
Abstract:
This article evaluates the causal relationship between knowledge management and technological innovation capabilities, and the effect of this relationship on the operational results of the textile sector in the city of Medellín. A system dynamics methodology was employed, with scenario simulation to assess the current conditions of the sector's organizations in terms of knowledge accumulation and capabilities. The information was obtained through interviews with experts and access to specialized sector data. The results show that improving the relationship between knowledge management and technological innovation generates an increase of approximately 15% in the sector's operating revenues. Likewise, it was found that as the common variables of interest (organizational strategies, communication channels, training, culture, and R&D strengthening actions) approach their desired values, the accumulation of knowledge and of technological innovation capabilities reaches the target values.
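A minimal sketch of the kind of stock-and-flow structure such a system dynamics model uses is given below; the structure and all rates are purely illustrative and are not the article's calibrated model:

```python
# Illustrative stock-and-flow sketch: a knowledge stock fed by training and
# communication inflows drives an innovation-capability stock, which in
# turn lifts operating revenue. Euler integration; numbers are made up.
knowledge, capability, revenue = 10.0, 5.0, 100.0
dt, learning, decay = 0.25, 1.2, 0.05          # hypothetical rates

for _ in range(int(40 / dt)):                  # 40 simulated periods
    inflow = learning                          # training, channels, culture...
    knowledge += dt * (inflow - decay * knowledge)
    capability += dt * (0.10 * knowledge - decay * capability)
    revenue += dt * (0.02 * capability * revenue / 100.0)
print(round(knowledge, 1), round(capability, 1), round(revenue, 1))
```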
Abstract:
The electricity sector is undergoing major changes both in management and in the market. One of the keys accelerating this change is the ever-greater penetration of Distributed Energy Resources (DER), which are giving the user a greater role in the management of the electrical system. The complexity of the scenario foreseen for the near future demands that grid equipment be able to interact within a much more dynamic system than today's, where the connection interface must be endowed with the necessary intelligence and communication capability so that the whole system can be managed effectively as a whole. We are currently witnessing the transition from the traditional electrical system model to a new, active and intelligent system known as the Smart Grid. This thesis presents the study of an Intelligent Electronic Device (IED) aimed at providing solutions for the needs arising from the evolution of the electrical system, capable of being integrated into current and future grid equipment, contributing functionality and therefore added value to these systems. To situate the needs of these IEDs, an extensive background study has been carried out, beginning with an analysis of the historical evolution of these systems, the characteristics of the electrical interconnection they must control, and the various functions and solutions they must provide, finally arriving at a review of the current state of the art. Within this background, a regulatory review is also carried out, at international and national levels, needed to situate the various requirements these devices must satisfy. The specifications and considerations necessary for the design are then presented, together with the multifunctional architecture. At this point, some original design approaches are proposed, related to the architecture of the IED and how data should be synchronized, depending on the nature of the events and the different functionalities. The development of the system continues with the design of the different subsystems that compose it, where some novel algorithms are presented, such as the anti-islanding approach with weighted multiple detection. With the architecture and functions of the IED designed, the development of a prototype based on a hardware platform is presented. To this end, the necessary requirements are analyzed, and the choice of a high-performance embedded platform including a processor and an FPGA is justified. The developed prototype is subjected to a Class A test protocol, according to standards IEC 61000-4-30 and IEC 62586-2, to verify the monitoring of parameters. Various tests are also presented in which the delays involved in the protection-related algorithms were estimated. Finally, a real test scenario is discussed, within the context of a National Research Plan project, in which this prototype was integrated into an inverter, endowing it with the intelligence needed for a future Smart Grid context.
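The weighted multiple-detection idea behind the anti-islanding approach can be illustrated generically as below: several passive criteria each produce a score, and a weighted sum is compared with a trip threshold. The weights, limits, and threshold are invented for the sketch and are not the thesis's values:

```python
# Generic weighted multiple-detection sketch for passive anti-islanding:
# each criterion scores the measurements in [0, 1] and a weighted sum is
# compared with a trip threshold. All numbers are illustrative.

CRITERIA = [  # (name, weight, test(measurements) -> 0.0..1.0)
    ("voltage", 0.3, lambda m: 1.0 if not 0.85 <= m["v_pu"] <= 1.10 else 0.0),
    ("freq",    0.3, lambda m: 1.0 if not 49.5 <= m["f_hz"] <= 50.5 else 0.0),
    ("rocof",   0.4, lambda m: min(abs(m["rocof"]) / 1.0, 1.0)),
]

def islanding_score(measurements: dict) -> float:
    """Weighted combination of the individual detection criteria."""
    return sum(w * test(measurements) for _, w, test in CRITERIA)

m = {"v_pu": 1.02, "f_hz": 50.9, "rocof": 0.8}   # made-up sample
score = islanding_score(m)
print(score, "TRIP" if score >= 0.5 else "stay connected")
```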
Abstract:
Cardiovascular diseases (CVDs) have reached epidemic proportions in the US and worldwide, with serious consequences in terms of human suffering and economic impact. More than one third of American adults suffer from CVDs. The total direct and indirect costs of CVDs exceed $500 billion per year. There is therefore an urgent need to develop noninvasive diagnostic methods, to design minimally invasive assist devices, and to develop economical and easy-to-use monitoring systems for cardiovascular diseases. In order to achieve these goals, it is necessary to gain a better understanding of the subsystems that constitute the cardiovascular system. The aorta is one of these subsystems, and its role in cardiovascular functioning has been underestimated. Traditionally, the aorta and its branches have been viewed as resistive conduits connected to an active pump (the left ventricle of the heart). However, this perception fails to explain many observed physiological results. My goal in this thesis is to demonstrate the subtle but important role of the aorta as a system, with a focus on the wave dynamics in the aorta.
The operation of a healthy heart is based on an optimized balance between its pumping characteristics and the hemodynamics of the aorta and vascular branches. The delicate balance between the aorta and heart can be impaired due to aging, smoking, or disease. The heart generates pulsatile flow that produces pressure and flow waves as it enters into the compliant aorta. These aortic waves propagate and reflect from reflection sites (bifurcations and tapering). They can act constructively and assist the blood circulation. However, they may act destructively, promoting diseases or initiating sudden cardiac death. These waves also carry information about the diseases of the heart, vascular disease, and coupling of heart and aorta. In order to elucidate the role of the aorta as a dynamic system, the interplay between the dominant wave dynamic parameters is investigated in this study. These parameters are heart rate, aortic compliance (wave speed), and locations of reflection sites. Both computational and experimental approaches have been used in this research. In some cases, the results are further explained using theoretical models.
The main findings of this study are as follows: (i) developing a physiologically realistic outflow boundary condition for blood flow modeling in a compliant vasculature; (ii) demonstrating that pulse pressure as a single index cannot predict the true level of pulsatile workload on the left ventricle; (iii) proving that there is an optimum heart rate at which the pulsatile workload of the heart is minimized, and that this optimum shifts to a higher value as aortic rigidity increases; (iv) introducing a simple bio-inspired device for correction and optimization of aortic wave reflection that reduces the workload on the heart; (v) deriving a non-dimensional number that can predict the optimum wave dynamic state in a mammalian cardiovascular system; (vi) demonstrating that waves can create a pumping effect in the aorta; (vii) introducing a system parameter and a new medical index, Intrinsic Frequency, that can be used for noninvasive diagnosis of heart and vascular diseases; and (viii) proposing a new medical hypothesis for sudden cardiac death in young athletes.
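Two textbook relations underpin the wave-dynamic parameters studied here, and a minimal sketch of both follows: the Moens-Korteweg formula links pulse-wave speed to aortic stiffness (the inverse of compliance), and the impedance-mismatch formula gives the reflection coefficient at a reflection site. The vessel numbers are illustrative only:

```python
# Sketch of two standard wave-dynamics relations (not the thesis's models).
import math

def moens_korteweg(E, h, rho, D):
    """Pulse wave speed c = sqrt(E*h / (rho*D)) for a thin-walled vessel."""
    return math.sqrt(E * h / (rho * D))

def reflection_coeff(Z1, Z2):
    """R = (Z2 - Z1) / (Z2 + Z1) for a wave hitting the Z1 -> Z2 junction."""
    return (Z2 - Z1) / (Z2 + Z1)

# illustrative aortic values: E ~ 0.4 MPa, wall 1.5 mm, diameter 25 mm
c = moens_korteweg(E=0.4e6, h=1.5e-3, rho=1060.0, D=0.025)
print(f"wave speed ~ {c:.1f} m/s, R = {reflection_coeff(1.0, 3.0):.2f}")
```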
Abstract:
Biofilms, microbial forms of association, are responsible for generating, accelerating, and/or inducing the corrosion process. The damage this type of corrosion causes in the petroleum industry is significant, demanding major investment for its control. The aim of this study was to evaluate, using antibiogram-type tests, the effects of extracts of Jatropha curcas and essential oil of Lippia gracilis Schauer on microorganisms isolated from water samples and, thereafter, to select the most effective natural product for further evaluation against biofilms formed in a dynamic system. Extracts of J. curcas did not completely inhibit microbial growth in the antibiogram-type tests; the essential oil of L. gracilis Schauer was the most effective and was selected for the remaining tests. A standard essential-oil concentration of 20 μL was chosen and established for the evaluation of the biofilms and of the corrosion rate. The biocidal effect was determined by microbial counts of five types of microorganisms: aerobic bacteria, iron-precipitating bacteria, total anaerobes, sulphate reducers (SRB) and fungi. The corrosion rate was measured by mass loss. Molecular identification and scanning electron microscopy (SEM) were also performed. The data showed a reduction to zero of the most probable number (MPN) of iron-precipitating bacteria and SRB after 115 and 113 minutes of contact, respectively. Fungi were also inhibited, with the count of colony-forming units (CFU) reduced to zero after 74 minutes of exposure. However, for aerobic and anaerobic bacteria there was no significant difference with time of exposure to the essential oil, the counts remaining constant. The corrosion rate was also influenced by the presence of the oil. The essential oil of L. gracilis was shown to be potentially effective.
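Since the corrosion rate was measured by mass loss, the standard mass-loss calculation (the ASTM G1 form) is sketched below for reference; the coupon values are invented and are not the study's measurements:

```python
# Standard mass-loss corrosion-rate calculation, CR = K*W / (A*T*D).
# Coupon numbers below are illustrative.
K = 8.76e4                 # constant giving the rate in mm/year
W = 0.0123                 # mass loss [g]
A = 11.4                   # exposed coupon area [cm^2]
T = 720.0                  # exposure time [h]
D = 7.86                   # steel density [g/cm^3]

corrosion_rate = K * W / (A * T * D)   # mm/year
print(f"{corrosion_rate:.4f} mm/year")
```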
Abstract:
An Approach with Vertical Guidance (APV) is an instrument approach procedure which provides horizontal and vertical guidance to a pilot on approach to landing in reduced visibility conditions. APV approaches can greatly reduce the safety risk to general aviation by improving the pilot's situational awareness. In particular, the incidence of Controlled Flight Into Terrain (CFIT), which has featured in a number of fatal air crashes in general aviation in Australia over the past decade, can be reduced. APV approaches can also improve general aviation operations. If implemented at Australian airports, APV approach procedures are expected to bring a cost saving of millions of dollars to the economy due to fewer missed approaches, diversions, and an increased safety benefit. The provision of accurate horizontal and vertical guidance is achievable using the Global Positioning System (GPS). Because aviation is a safety-of-life application, an aviation-certified GPS receiver must have integrity monitoring or augmentation to ensure that its navigation solution can be trusted. However, the difficulty of the current GPS satellite constellation alone meeting APV integrity requirements, the susceptibility of GPS to jamming or interference, and the potential shortcomings of proposed augmentation solutions for Australia such as the Ground-based Regional Augmentation System (GRAS) justify the investigation of Aircraft Based Augmentation Systems (ABAS) as an alternative integrity solution for general aviation. ABAS augments GPS with other sensors at the aircraft to help it meet the integrity requirements. Typical ABAS designs assume high-quality inertial sensors to provide an accurate reference trajectory for Kalman filters. Unfortunately, high-quality inertial sensors are too expensive for general aviation. In contrast to these approaches, the purpose of this research is to investigate fusing GPS with lower-cost Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMU) and a mathematical model of aircraft dynamics, referred to in this thesis as an Aircraft Dynamic Model (ADM). Using a model of aircraft dynamics in navigation systems has been studied before in the available literature and shown to be useful, particularly for aiding inertial coasting or attitude determination. In contrast to these applications, this thesis investigates its use in ABAS. This thesis presents an ABAS architecture concept which makes use of a MEMS IMU and ADM, named the General Aviation GPS Integrity System (GAGIS) for convenience. GAGIS includes a GPS, MEMS IMU, ADM, and a bank of Extended Kalman Filters (EKF), and uses the Normalized Solution Separation (NSS) method for fault detection. The GPS, IMU and ADM information is fused together in a tightly-coupled configuration, with frequent GPS updates applied to correct the IMU and ADM. The use of both IMU and ADM allows for a number of different possible configurations, of which three are investigated in this thesis: a GPS-IMU EKF, a GPS-ADM EKF and a GPS-IMU-ADM EKF. The integrity monitoring performance of the GPS-IMU EKF, GPS-ADM EKF and GPS-IMU-ADM EKF architectures is compared against the others and against a stand-alone GPS architecture in a series of computer simulation tests of an APV approach, with typical GPS, IMU, ADM and environmental errors simulated. The simulation results show the GPS integrity monitoring performance achievable by augmenting GPS with an ADM and a low-cost IMU for a general aviation aircraft on an APV approach.
A contribution to research is made in determining whether a low-cost IMU or ADM can provide improved integrity monitoring performance over stand-alone GPS. It is found that a reduction of approximately 50% in protection levels is possible using the GPS-IMU EKF or GPS-ADM EKF as well as faster detection of a slowly growing ramp fault on a GPS pseudorange measurement. A second contribution is made in determining how augmenting GPS with an ADM compares to using a low-cost IMU. By comparing the results for the GPS-ADM EKF against the GPS-IMU EKF it is found that protection levels for the GPS-ADM EKF were only approximately 2% higher. This indicates that the GPS-ADM EKF may potentially replace the GPS-IMU EKF for integrity monitoring should the IMU ever fail. In this way the ADM may contribute to the navigation system robustness and redundancy. To investigate this further, a third contribution is made in determining whether or not the ADM can function as an IMU replacement to improve navigation system redundancy by investigating the case of three IMU accelerometers failing. It is found that the failed IMU measurements may be supplemented by the ADM and adequate integrity monitoring performance achieved. Besides treating the IMU and ADM separately as in the GPS-IMU EKF and GPS-ADM EKF, a fourth contribution is made in investigating the possibility of fusing the IMU and ADM information together to achieve greater performance than either alone. This is investigated using the GPS-IMU-ADM EKF. It is found that the GPS-IMU-ADM EKF can achieve protection levels approximately 3% lower in the horizontal and 6% lower in the vertical than a GPS-IMU EKF. However this small improvement may not justify the complexity of fusing the IMU with an ADM in practical systems. Affordable ABAS in general aviation may enhance existing GPS-only fault detection solutions or help overcome any outages in augmentation systems such as the Ground-based Regional Augmentation System (GRAS). Countries such as Australia which currently do not have an augmentation solution for general aviation could especially benefit from the economic savings and safety benefits of satellite navigation-based APV approaches.
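The Normalized Solution Separation test at the core of the fault detection can be sketched generically as follows: each sub-solution excludes one satellite, and its separation from the full solution, normalised by the covariance of that separation, is compared with a chi-square-derived threshold. All numbers below are illustrative, not GAGIS outputs:

```python
# Generic solution-separation fault-detection sketch (illustrative only).
import numpy as np

def nss_statistic(x_full, x_sub, P_full, P_sub):
    """Normalised separation between full and subset position solutions."""
    d = x_sub - x_full
    dP = P_sub - P_full          # covariance of the separation
    return float(d @ np.linalg.solve(dP, d))

x_full = np.array([0.0, 0.0])    # full-solution position estimate [m]
x_sub  = np.array([1.8, -0.9])   # sub-solution excluding satellite k
P_full = np.diag([1.0, 1.0])
P_sub  = np.diag([2.5, 2.5])
T = 13.8                         # threshold set by the allowed false-alert rate
s = nss_statistic(x_full, x_sub, P_full, P_sub)
print(s, "FAULT" if s > T else "no fault detected")
```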