834 results for Navigation systems.
Abstract:
Distributed generators (DGs) are defined as generators that are connected to a distribution network. The direction of power flow and short-circuit current in such a network can change compared with a network without DGs, and conventional protective relay schemes do not meet the requirements of this emerging situation. As the number and capacity of DGs in the distribution network increase, the problem of coordinating protective relays becomes more challenging. Given this background, the protective relay coordination problem in distribution systems is investigated, with directional overcurrent relays taken as an example, and formulated as a mixed-integer nonlinear programming problem. A mathematical model describing this problem is first developed, and the well-established differential evolution algorithm is then used to solve it. Finally, a sample system is used to demonstrate the feasibility and efficiency of the developed method.
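As a rough illustration of how such a coordination problem can be posed for a population-based solver, the sketch below minimises the total operating time of the primary relays, enforces the coordination time interval through a penalty term, and hands the problem to SciPy's differential evolution routine. The four-relay data, IEC standard-inverse characteristic and penalty weight are illustrative assumptions, not the paper's test system or exact formulation.

# Minimal sketch: directional overcurrent relay coordination posed as an
# optimisation problem and solved with differential evolution (SciPy).
# The network data below (fault currents, pickup settings, primary/backup
# pairs) are illustrative assumptions, not the paper's sample system.
import numpy as np
from scipy.optimize import differential_evolution

# IEC standard-inverse operating time: t = TDS * 0.14 / (M**0.02 - 1)
def op_time(tds, i_fault, i_pickup):
    m = i_fault / i_pickup
    return tds * 0.14 / (m ** 0.02 - 1.0)

# Hypothetical 4-relay example
I_PICKUP = np.array([1.0, 1.2, 0.8, 1.0])        # pickup currents (kA)
I_FAULT  = np.array([4.0, 3.5, 5.0, 4.2])        # fault currents seen by each relay (kA)
PAIRS = [(0, 1), (2, 3)]                          # (primary, backup) relay index pairs
CTI = 0.3                                         # coordination time interval (s)

def objective(tds):
    # Minimise total primary operating time, penalising violated coordination margins
    t = op_time(tds, I_FAULT, I_PICKUP)
    penalty = 0.0
    for p, b in PAIRS:
        slack = t[b] - t[p] - CTI
        if slack < 0:
            penalty += 1e3 * (-slack)
    return t.sum() + penalty

bounds = [(0.05, 1.1)] * 4                        # time-dial setting limits
result = differential_evolution(objective, bounds, seed=1, tol=1e-8)
print("optimal TDS:", np.round(result.x, 3), "objective:", round(result.fun, 3))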
Abstract:
Children who have suffered physical or sexual abuse are as vulnerable as adult trauma victims to experiencing "secondary trauma", in which the reactions of the family or broader system exacerbate the child's difficulties. Three clinical cases (a 7-year-old male, an 8-year-old male, and a 7-year-old female) are presented that suggest that this secondary trauma can be made worse by either excessive or insufficient provision of individual child psychotherapy, and by the way the system interprets and reacts to these clinical decisions. Types of secondary trauma and their interactions with clinical decisions are discussed. Ways of framing clinical decisions to minimize the potential for secondary trauma are presented.
Abstract:
One of the major challenges in achieving long term robot autonomy is the need for a SLAM algorithm that can perform SLAM over the operational lifetime of the robot, preferably without human intervention or supervision. In this paper we present insights gained from a two week long persistent SLAM experiment, in which a Pioneer robot performed mock deliveries in a busy office environment. We used the biologically inspired visual SLAM system, RatSLAM, combined with a hybrid control architecture that selected between exploring the environment, performing deliveries, and recharging. The robot performed more than a thousand successful deliveries with only one failure (from which it recovered), travelled more than 40 km over 37 hours of active operation, and recharged autonomously 23 times. We discuss several issues arising from the success (and limitations) of this experiment and two subsequent avenues of work.
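As a loose sketch of the kind of behaviour arbitration described above (exploring, performing deliveries, recharging), the fragment below selects a task from the battery level and a delivery queue. The thresholds and state fields are illustrative assumptions and are not drawn from RatSLAM or the experiment's control code.

# Minimal sketch of a hybrid control arbiter choosing between exploring,
# delivering and recharging, in the spirit of the architecture described above.
# Battery thresholds and task-queue handling are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RobotState:
    battery_pct: float                 # remaining charge, 0-100
    pending_deliveries: List[str] = field(default_factory=list)
    map_complete: bool = False

RECHARGE_THRESHOLD = 25.0              # assumed: dock and recharge below this level

def select_behaviour(state: RobotState) -> str:
    if state.battery_pct < RECHARGE_THRESHOLD:
        return "recharge"              # autonomously dock and recharge
    if state.pending_deliveries:
        return "deliver"               # serve the next mock delivery task
    if not state.map_complete:
        return "explore"               # keep building / refreshing the map
    return "idle"

print(select_behaviour(RobotState(battery_pct=80, pending_deliveries=["room 12"])))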
Abstract:
Future vehicle navigation for safety applications requires seamless positioning at an accuracy of sub-meter or better. However, standalone Global Positioning System (GPS) or Differential GPS (DGPS) solutions suffer from outages in restricted areas such as high-rise urban areas and tunnels due to the blockage of satellite signals. Smoothed DGPS can provide sub-meter positioning accuracy, but cannot meet the seamless requirement. Traditional navigation aids such as Dead Reckoning and Inertial Measurement Units onboard vehicles are either not accurate enough, due to error accumulation, or too expensive to be accepted by mass-market vehicle users. One alternative technology is to use wireless infrastructure installed along the roadside to locate vehicles in regions where Global Navigation Satellite System (GNSS) signals are not available (for example, inside tunnels, urban canyons and large indoor car parks). Examples of roadside infrastructure that could potentially be used for positioning purposes include Wireless Local Area Network (WLAN)/Wireless Personal Area Network (WPAN) based positioning systems, Ultra-wideband (UWB) based positioning systems, Dedicated Short Range Communication (DSRC) devices, Locata’s positioning technology, and accurate road surface height information over selected road segments such as tunnels. This research reviews and compares the wireless technologies that could be installed along the roadside for positioning purposes. Models and algorithms for integrating different positioning technologies are also presented. Various simulation schemes are designed to examine the performance benefits of combining GNSS and roadside infrastructure for vehicle positioning. The results from these experimental studies have shown a number of useful findings. In open road environments where sufficient satellite signals can be obtained, the roadside wireless measurements contribute very little to improving positioning accuracy at the sub-meter level, especially in the dual-constellation cases. In restricted outdoor environments where only a few GPS satellites, such as those with elevation angles above 45°, can be received, the roadside distance measurements help improve both positioning accuracy and availability to the sub-meter level. When the vehicle is travelling in tunnels with known heights of tunnel surfaces and roadside distance measurements, sub-meter horizontal positioning accuracy is also achievable. Overall, the simulation results have demonstrated that roadside infrastructure indeed has the potential to provide sub-meter vehicle position solutions for certain road safety applications if properly deployed roadside measurements are obtainable.
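One simple way to picture the integration of GNSS and roadside ranging is a snapshot least-squares fix in which pseudoranges (carrying a receiver clock bias) and clock-free ranges to roadside units at surveyed positions are stacked into a single Gauss-Newton solution. The sketch below follows that idea; the geometry, measurement set and convergence settings are illustrative assumptions rather than the thesis's models or simulation schemes.

# Minimal sketch: snapshot least-squares fix combining GPS pseudoranges (which
# include a receiver clock bias) with clock-free ranges to roadside units at
# surveyed positions. Geometry and measurement values are illustrative assumptions.
import numpy as np

def solve_position(sat_pos, pseudoranges, rsu_pos, rsu_ranges, x0):
    """Iterative Gauss-Newton solution for [x, y, z, clock_bias]."""
    x = np.array(x0, dtype=float)
    for _ in range(10):
        rows, resid = [], []
        for p, rho in zip(sat_pos, pseudoranges):
            d = np.linalg.norm(x[:3] - p)
            rows.append(np.r_[(x[:3] - p) / d, 1.0])       # partials w.r.t. position and bias
            resid.append(rho - (d + x[3]))
        for p, r in zip(rsu_pos, rsu_ranges):
            d = np.linalg.norm(x[:3] - p)
            rows.append(np.r_[(x[:3] - p) / d, 0.0])        # roadside range has no clock term
            resid.append(r - d)
        H, z = np.array(rows), np.array(resid)
        dx, *_ = np.linalg.lstsq(H, z, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-4:
            break
    return x

# Hypothetical scenario: 3 visible satellites plus 2 roadside units (metres)
sats = np.array([[15e6, 10e6, 20e6], [-10e6, 18e6, 19e6], [5e6, -20e6, 18e6]])
rsus = np.array([[120.0, 40.0, 5.0], [-80.0, 60.0, 5.0]])
truth = np.array([10.0, 20.0, 0.0]); bias = 30.0
prs = np.linalg.norm(sats - truth, axis=1) + bias
ranges = np.linalg.norm(rsus - truth, axis=1)
est = solve_position(sats, prs, rsus, ranges, x0=[0, 0, 0, 0])
print("estimated position:", np.round(est[:3], 2), "clock bias:", round(est[3], 2))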
Abstract:
This project investigates machine listening and improvisation in interactive music systems with the goal of improvising musically appropriate accompaniment to an audio stream in real-time. The input audio may be from a live musical ensemble, or playback of a recording for use by a DJ. I present a collection of robust techniques for machine listening in the context of Western popular dance music genres, and strategies of improvisation to allow for intuitive and musically salient interaction in live performance. The findings are embodied in a computational agent – the Jambot – capable of real-time musical improvisation in an ensemble setting. Conceptually the agent’s functionality is split into three domains: reception, analysis and generation. The project has resulted in novel techniques for addressing a range of issues in each of these domains. In the reception domain I present a novel suite of onset detection algorithms for real-time detection and classification of percussive onsets. This suite achieves reasonable discrimination between the kick, snare and hi-hat attacks of a standard drum-kit, with sufficiently low latency to allow perceptually simultaneous triggering of accompaniment notes. The onset detection algorithms are designed to operate in the context of complex polyphonic audio. In the analysis domain I present novel beat-tracking and metre-induction algorithms that operate in real-time and are responsive to change in a live setting. I also present a novel analytic model of rhythm, based on musically salient features. This model informs the generation process, affording intuitive parametric control and allowing for the creation of a broad range of interesting rhythms. In the generation domain I present a novel improvisatory architecture drawing on theories of music perception, which provides a mechanism for the real-time generation of complementary accompaniment in an ensemble setting. All of these innovations have been combined into a computational agent – the Jambot – which is capable of producing improvised percussive musical accompaniment to an audio stream in real-time. I situate the architectural philosophy of the Jambot within contemporary debate regarding the nature of cognition and artificial intelligence, and argue for an approach to algorithmic improvisation that privileges the minimisation of cognitive dissonance in human-computer interaction. This thesis contains extensive written discussions of the Jambot and its component algorithms, along with some comparative analyses of aspects of its operation and aesthetic evaluations of its output. The accompanying CD contains the Jambot software, along with video documentation of experiments and performances conducted during the project.
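For readers unfamiliar with onset detection, the sketch below shows a generic spectral-flux detector with simple peak picking, which conveys the flavour of the reception stage; it is not the Jambot's algorithm, and the frame size, hop size and threshold are illustrative assumptions.

# Minimal sketch of a generic spectral-flux onset detector, to make the
# reception-stage idea concrete. This is not the Jambot's algorithm; frame
# size, hop size and the adaptive threshold are illustrative assumptions.
import numpy as np

def detect_onsets(signal, sr, frame=1024, hop=512, threshold=1.5):
    """Return onset times (seconds) where spectral flux exceeds a local median threshold."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    prev_mag = np.zeros(frame // 2 + 1)
    flux = []
    for i in range(n_frames):
        chunk = signal[i * hop : i * hop + frame] * window
        mag = np.abs(np.fft.rfft(chunk))
        # half-wave rectified spectral flux: only count energy increases
        flux.append(np.maximum(mag - prev_mag, 0.0).sum())
        prev_mag = mag
    flux = np.array(flux)
    onsets = []
    for i in range(1, len(flux) - 1):
        local = flux[max(0, i - 8) : i + 1]
        if flux[i] > threshold * np.median(local) and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]:
            onsets.append(i * hop / sr)
    return onsets

# Usage: impulses every 0.5 s (starting at 0.25 s) should yield onsets ~0.5 s apart
sr = 22050
t = np.arange(sr * 2) / sr
sig = np.zeros_like(t)
sig[(np.arange(0.25, 2, 0.5) * sr).astype(int)] = 1.0
print(np.round(detect_onsets(sig, sr), 2))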
Abstract:
In recent times, light gauge steel framed (LSF) structures, such as cold-formed steel wall systems, are increasingly used, but without a full understanding of their fire performance. Traditionally the fire resistance rating of these load-bearing LSF wall systems is based on approximate prescriptive methods developed from limited fire tests. Very often they are limited to standard wall configurations used by the industry. Increased fire rating is provided simply by adding more plasterboards to these walls. This is not an acceptable situation as it not only inhibits innovation and structural and cost efficiencies but also casts doubt over the fire safety of these wall systems. Hence a detailed fire research study into the performance of LSF wall systems was undertaken using full scale fire tests and extensive numerical studies. A new composite wall panel developed at QUT was also considered in this study, where the insulation was used externally between the plasterboards on both sides of the steel wall frame instead of locating it in the cavity. Three full scale fire tests of LSF wall systems built using the new composite panel system were undertaken at a higher load ratio using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005). Fire tests included the measurements of load-deformation characteristics of LSF walls until failure as well as associated time-temperature measurements across the thickness and along the length of all the specimens. Tests of LSF walls under axial compression load have shown the improvement to their fire performance and fire resistance rating when the new composite panel was used. Hence this research recommends the use of the new composite panel system for cold-formed LSF walls. The numerical study was undertaken using the finite element program ABAQUS. The finite element analyses were conducted under both steady state and transient state conditions using the measured hot and cold flange temperature distributions from the fire tests. The elevated temperature reduction factors for mechanical properties were based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). These finite element models were first validated by comparing their results with experimental test results from this study and Kolarkar (2010). The developed finite element models were able to predict the failure times within 5 minutes. The validated model was then used in a detailed numerical study into the strength of cold-formed thin-walled steel channels used in both the conventional and the new composite panel systems to increase the understanding of their behaviour under non-uniform elevated temperature conditions and to develop fire design rules. The measured time-temperature distributions obtained from the fire tests were used. Since the fire tests showed that the plasterboards provided sufficient lateral restraint until the failure of LSF wall panels, this assumption was also used in the analyses and was further validated by comparison with experimental results. Hence in this study of LSF wall studs, only flexural buckling about the major axis and local buckling were considered. A new fire design method was proposed using AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated.
A spreadsheet-based design tool was developed based on the above design codes to predict the failure load ratio versus time and temperature for varying LSF wall configurations including insulations. Idealised time-temperature profiles were developed based on the measured temperature values of the studs and were used in a detailed numerical study to fully understand the structural behaviour of LSF wall panels. Appropriate equations were proposed to find the critical temperatures for different composite panels, varying in steel thickness, steel grade and screw spacing, for any load ratio. Hence useful and simple design rules were proposed based on the current cold-formed steel structures and fire design standards, and their accuracy and advantages were discussed. The results were also used to validate the fire design rules developed based on AS/NZS 4600 (SA, 2005) and Eurocode 3 Part 1.3 (ECS, 2006). This demonstrated the significant improvements to the design method when compared to the currently used prescriptive design methods for LSF wall systems under fire conditions. In summary, this research has developed comprehensive experimental and numerical thermal and structural performance data for both the conventional and the proposed new load-bearing LSF wall systems under standard fire conditions. Finite element models were developed to predict the failure times of LSF walls accurately. Idealised hot flange temperature profiles were developed for non-insulated, cavity insulated and externally insulated load-bearing wall systems. Suitable fire design rules and spreadsheet-based design tools were developed based on the existing standards to predict the ultimate failure load, failure times and failure temperatures of LSF wall studs. Simplified equations were proposed to find the critical temperatures for varying wall panel configurations and load ratios. The results from this research are useful to both structural and fire engineers and researchers. Most importantly, this research has significantly improved the knowledge and understanding of cold-formed LSF load-bearing walls under standard fire conditions.
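The spreadsheet-style check described above can be pictured as interpolating an elevated-temperature reduction curve along an idealised hot-flange temperature profile and reporting the time at which the retained capacity falls below the applied load ratio. The sketch below illustrates only that bookkeeping; the reduction factors and temperature profile are placeholders, not the test data or the equations of Dolamune Kankanamge and Mahendran (2011).

# Illustrative sketch of a spreadsheet-style check: given an idealised hot-flange
# time-temperature profile and a yield-strength reduction curve, find the failure
# time at which the retained capacity ratio drops below the applied load ratio.
# All numbers below are placeholders, not measured or published values.
import numpy as np

# placeholder reduction factors for yield strength vs temperature (deg C)
TEMP_POINTS = np.array([20, 200, 400, 500, 600, 700])
KY_POINTS   = np.array([1.00, 0.95, 0.70, 0.50, 0.25, 0.10])

# placeholder idealised hot-flange profile: temperature (deg C) vs time (min)
TIME_POINTS      = np.array([0, 10, 30, 60, 90, 120])
HOT_FLANGE_TEMPS = np.array([20, 150, 350, 520, 620, 700])

def capacity_ratio(temp_c):
    """Retained axial capacity ratio, here taken simply as the yield reduction factor."""
    return np.interp(temp_c, TEMP_POINTS, KY_POINTS)

def failure_time(load_ratio, dt=0.5):
    """First time (min) at which retained capacity falls below the applied load ratio."""
    for t in np.arange(0, TIME_POINTS[-1] + dt, dt):
        temp = np.interp(t, TIME_POINTS, HOT_FLANGE_TEMPS)
        if capacity_ratio(temp) < load_ratio:
            return t
    return None

print("failure time at load ratio 0.4:", failure_time(0.4), "min")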
Abstract:
A distributed fuzzy system is a real-time fuzzy system in which the input, output and computation may be located on different networked computing nodes. The ability for a distributed software application, such as a distributed fuzzy system, to adapt to changes in the computing network at runtime can provide real-time performance improvement and fault-tolerance. This paper introduces an Adaptable Mobile Component Framework (AMCF) that provides a distributed dataflow-based platform with a fine-grained level of runtime reconfigurability. The execution location of small fragments (possibly as little as a few machine-code instructions) of an AMCF application can be moved between different computing nodes at runtime. A case study is included that demonstrates the applicability of the AMCF to a distributed fuzzy system scenario involving multiple physical agents (such as autonomous robots). Using the AMCF, fuzzy systems can now be developed such that they can be distributed automatically across multiple computing nodes and are adaptable to runtime changes in the networked computing environment. This provides the opportunity to improve the performance of fuzzy systems deployed in scenarios where the computing environment is resource-constrained and volatile, such as multiple autonomous robots, smart environments and sensor networks.
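To make the idea of fine-grained distribution concrete, the sketch below breaks a toy fuzzy controller into three small fragments (fuzzification, rule evaluation, defuzzification) that a dataflow platform could in principle place on different nodes. This is not the AMCF API; the membership functions, rules and output centroids are illustrative assumptions.

# Minimal sketch, not the AMCF API: a tiny fuzzy controller decomposed into small
# fragments that a dataflow framework could in principle place on different nodes.
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_distance(d):
    # fragment 1: could run on the robot that owns the range sensor
    return {"near": tri(d, -1.0, 0.0, 1.0), "far": tri(d, 0.5, 2.0, 3.5)}

def evaluate_rules(memberships):
    # fragment 2: could run on any node; maps linguistic terms to speed set-points
    return {"slow": memberships["near"], "fast": memberships["far"]}

def defuzzify(strengths, centroids={"slow": 0.2, "fast": 1.0}):
    # fragment 3: weighted-average (Sugeno-style) defuzzification
    num = sum(strengths[k] * centroids[k] for k in strengths)
    den = sum(strengths.values()) or 1.0
    return num / den

# Usage: the three fragments form a simple dataflow pipeline
distance_m = 0.8
speed = defuzzify(evaluate_rules(fuzzify_distance(distance_m)))
print(f"commanded speed: {speed:.2f} m/s")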
Abstract:
Design Science Research (DSR) has emerged as an important approach in Information Systems (IS) research, evidenced by the plethora of recent related articles in recognized IS outlets. Nonetheless, discussion continues on the value of DSR for IS and how to conduct strong DSR, and further discussion is necessary to better position DSR as a mature and stable research paradigm appropriate for IS. This paper contributes to addressing this need by providing a comprehensive conceptual and argumentative positioning of DSR relative to the core of IS. It argues the relevance of DSR as a paradigm that addresses the core of the IS discipline well, using the framework defined by Wand and Weber to position what the core of IS is.
Abstract:
It is accepted that the efficiency of sugar cane clarification is closely linked with sugar juice composition (including suspended or insoluble impurities), the inorganic phosphate content, the liming condition and type, and the interactions between the juice components. These interactions are not well understood, particularly those between calcium, phosphate, and sucrose in sugar cane juice. Studies have been conducted on calcium oxide (CaO)/phosphate/sucrose systems in both synthetic and factory juices to provide further information on the defecation process (i.e., simple liming to effect impurity removal) and to identify an effective clarification process that would result in reduced scaling of sugar factory evaporators, pans, and centrifugals. Results have shown that a two-stage process involving the addition of lime saccharate to a set juice pH followed by the addition of sodium hydroxide to a final juice pH, or a similar two-stage process where the order of addition of the alkalis is reversed, prior to clarification reduces the impurity loading of the clarified juice compared to that of the clarified juice obtained by the conventional defecation process. The treatment process showed reductions in CaO (27% to 50%) and MgO (up to 20%) in clarified juices with no apparent loss in juice clarity or increase in residence time of the mud particles compared to those in the conventional process. There was also a reduction in the SiO2 content. However, the disadvantage of this process is the significant increase in the Na2O content.
Abstract:
An increase in the likelihood of navigational collisions in port waters has put focus on the collision avoidance process in port traffic safety. The most widely used on-board collision avoidance system is the automatic radar plotting aid, which is a passive warning system that triggers an alert based on the pilot’s pre-defined indicators of distance and time proximities at the closest point of approach in encounters with nearby vessels. To better help pilots in decision making in close-quarter situations, collision risk should be considered as a continuous monotonic function of the proximities, and risk perception should be considered probabilistically. This paper derives an ordered probit regression model to study perceived collision risks. To illustrate the procedure, the risks perceived by Singapore port pilots were obtained to calibrate the regression model. The results demonstrate that a framework based on the probabilistic risk assessment model can be used to give a better understanding of collision risk and to define a more appropriate level of evasive actions.
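For concreteness, the sketch below fits an ordered probit model by maximum likelihood to simulated data with proximity-style predictors and an ordinal risk rating; the data and variable choices are placeholders, not the Singapore pilot survey or the paper's calibrated model.

# Minimal sketch of an ordered probit fit by maximum likelihood. The predictors
# (distance/time proximity at the closest point of approach) and the simulated
# ordinal risk ratings are placeholders, not the Singapore pilot survey data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, J = 500, 4                                 # observations, ordered risk categories 0..3
X = rng.normal(size=(n, 2))                    # e.g. standardised distance and time proximities
beta_true = np.array([-1.0, -0.6])             # closer proximity -> higher perceived risk
cuts_true = np.array([-1.0, 0.0, 1.0])
latent = X @ beta_true + rng.normal(size=n)
y = np.searchsorted(cuts_true, latent)         # ordinal response in {0, 1, 2, 3}

def neg_loglik(params):
    beta = params[:2]
    # enforce increasing cutpoints via positive increments
    cuts = np.cumsum(np.r_[params[2], np.exp(params[3:])])
    eta = X @ beta
    upper = np.r_[cuts, np.inf][y] - eta
    lower = np.r_[-np.inf, cuts][y] - eta
    p = np.clip(norm.cdf(upper) - norm.cdf(lower), 1e-12, 1.0)
    return -np.log(p).sum()

x0 = np.zeros(2 + (J - 1))
fit = minimize(neg_loglik, x0, method="BFGS")
beta_hat = fit.x[:2]
cuts_hat = np.cumsum(np.r_[fit.x[2], np.exp(fit.x[3:])])
print("beta:", np.round(beta_hat, 2), "cutpoints:", np.round(cuts_hat, 2))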
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
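A minimal way to picture the distance-based suggestion is to score each work item against the user with a weighted combination of map distance and urgency and recommend the closest. The sketch below does just that; it is not the YAWL visualisation framework, and the weights, coordinates and due dates are illustrative assumptions.

# Small illustration of the distance idea, not the YAWL-based implementation:
# work items are scored against a user by combining a geographical distance with
# urgency (time to due date), and the closest item is suggested next.
from dataclasses import dataclass
from datetime import datetime
import math

@dataclass
class WorkItem:
    name: str
    position: tuple          # (x, y) on the chosen map, e.g. an office floor plan
    due: datetime

def combined_distance(user_pos, item, now, w_geo=1.0, w_time=0.5):
    geo = math.dist(user_pos, item.position)
    hours_left = max((item.due - now).total_seconds() / 3600.0, 0.0)
    return w_geo * geo + w_time * hours_left     # smaller = more attractive

def suggest_next(user_pos, items, now=None):
    now = now or datetime.now()
    return min(items, key=lambda it: combined_distance(user_pos, it, now))

items = [
    WorkItem("approve invoice", (2.0, 3.0), datetime(2024, 1, 1, 17, 0)),
    WorkItem("review claim",    (9.0, 1.0), datetime(2024, 1, 1, 10, 0)),
]
print(suggest_next((1.0, 1.0), items, now=datetime(2024, 1, 1, 9, 0)).name)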
Abstract:
Companies face the challenges of expanding their markets, improving products, services and processes, and exploiting intellectual capital in a dynamic network. Therefore, more companies are turning to Enterprise Systems (ES). Knowledge management (KM) has also received considerable attention and is continuously gaining the interest of industry, enterprises, and academia. For ES, KM can provide support across the entire lifecycle, from selection and implementation to use. In addition, it is recognised that an ontology is an appropriate methodology to accomplish a common consensus of communication, as well as to support a diversity of KM activities, such as knowledge repositories, retrieval, sharing, and dissemination. This paper examines the role of ontology-based KM for ES (OKES) and investigates the possible integration of ontology-based KM and ES. The authors develop a taxonomy as a framework for understanding OKES research. In order to achieve the objective of this study, a systematic review of existing research was conducted. Guided by the framework of the study and based on a theoretical foundation covering the ES lifecycle, KM, KM for ES, ontology, and ontology-based KM, a taxonomy for OKES is established.
Abstract:
With the continued development of renewable energy generation technologies and increasing pressure to combat the global effects of greenhouse warming, plug-in hybrid electric vehicles (PHEVs) have received worldwide attention, finding applications in North America and Europe. When a large number of PHEVs are introduced into a power system, there will be extensive impacts on power system planning and operation, as well as on electricity market development. It is therefore necessary to properly control PHEV charging and discharging behaviors. Given this background, a new unit commitment model and its solution method that take into account optimal PHEV charging and discharging controls are presented in this paper. A 10-unit, 24-hour unit commitment (UC) problem is employed to demonstrate the feasibility and efficiency of the developed method, and the impacts of the wide application of PHEVs on the operating costs and emissions of the power system are studied. Case studies are also carried out to investigate the impacts of different PHEV penetration levels and different PHEV charging modes on the results of the UC problem. A 100-unit system is employed for further analysis of the impacts of PHEVs on the UC problem in a larger system application. Simulation results demonstrate that the employment of optimized PHEV charging and discharging modes is very helpful for smoothing the load curve profile and enhancing the ability of the power system to accommodate more PHEVs. Furthermore, an optimal Vehicle to Grid (V2G) discharging control provides economic and efficient backups and spinning reserves for the secure and economic operation of the power system.
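The load-smoothing effect of controlled charging can be illustrated with a simple valley-filling heuristic that places PHEV charging energy into the lowest-load hours. The sketch below is only that heuristic, not the paper's unit commitment model; the 24-hour base load, fleet energy and charging limit are assumed values.

# Simple valley-filling sketch, not the paper's unit commitment model: PHEV
# charging energy is allocated to the lowest-load hours so the resulting load
# profile is smoother. The 24-hour base load and fleet energy are illustrative.
import numpy as np

base_load = np.array([700, 750, 850, 950, 1000, 1100, 1150, 1200, 1300, 1400,
                      1450, 1500, 1400, 1300, 1200, 1050, 1000, 1100, 1200,
                      1400, 1300, 1100, 900, 800], dtype=float)   # MW per hour
fleet_energy = 1200.0        # MWh of PHEV charging to place over the day
max_rate = 300.0             # MW charging limit in any single hour

charge = np.zeros(24)
remaining = fleet_energy
while remaining > 1e-6:
    load = base_load + charge
    # pick the hour with the lowest combined load that still has headroom
    candidates = np.where(charge < max_rate)[0]
    h = candidates[np.argmin(load[candidates])]
    step = min(10.0, max_rate - charge[h], remaining)   # fill in 10 MWh slices
    charge[h] += step
    remaining -= step

print("charging schedule (MW):", np.round(charge, 0))
print("peak-to-valley before:", base_load.max() - base_load.min(),
      "after:", round((base_load + charge).max() - (base_load + charge).min(), 1))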
Abstract:
Unmanned Aircraft Systems (UAS) describe a diverse range of aircraft that are operated without a human pilot on-board. Unmanned aircraft range from small rotorcraft, which can fit in the palm of your hand, through to fixed-wing aircraft comparable in size to a commercial passenger jet. The absence of a pilot on-board allows these aircraft to be developed with unique performance capabilities, facilitating a wide range of applications in surveillance, environmental management, agriculture, defence, and search and rescue. However, regulations relating to the safe design and operation of UAS first need to be developed before the many potential benefits from these applications can be realised. According to the International Civil Aviation Organization (ICAO), a Risk Management Process (RMP) should support all civil aviation policy and rulemaking activities (ICAO 2009). The RMP is described in the international standard ISO 31000:2009 (ISO, 2009a). This standard is intentionally generic and high-level, providing limited guidance on how it can be effectively applied to complex socio-technical decision problems such as the development of regulations for UAS. Through the application of principles and tools drawn from systems philosophy and systems engineering, this thesis explores how the RMP can be effectively applied to support the development of safety regulations for UAS. A sound systems-theoretic foundation for the RMP is presented in this thesis. Using the case-study scenario of a UAS operation over an inhabited area and through the novel application of principles drawn from general systems modelling philosophy, a consolidated framework of the definitions of the concepts of safe, risk and hazard is developed. The framework is novel in that it facilitates the representation of broader subjective factors in an assessment of the safety of a system; describes the issues associated with the specification of a system boundary; makes explicit the hierarchical nature of the relationship between the concepts and the subsequent constraints that exist between them; and can be evaluated using a range of analytic or deliberative modelling techniques. Following the general sequence of the RMP, the thesis explores the issues associated with the quantified specification of safety criteria for UAS. A novel risk analysis tool is presented. In contrast to existing risk tools, the analysis tool presented in this thesis quantifiably characterises both the societal and individual risk of UAS operations as a function of the flight path of the aircraft. A novel structuring of the risk evaluation and risk treatment decision processes is then proposed. The structuring is achieved through the application of the Decision Support Problem Technique, a modelling approach that has previously been used to effectively model complex engineering design processes and to support decision-making in relation to airspace design. The final contribution made by this thesis is the development of an airworthiness regulatory framework for civil UAS. A novel "airworthiness certification matrix" is proposed as a basis for the definition of UAS "Part 21" regulations. The resulting airworthiness certification matrix provides a flexible, systematic and justifiable method for promulgating airworthiness regulations for UAS. In addition, an approach for deriving "Part 1309" regulations for UAS is presented. In contrast to existing approaches, the approach presented in this thesis facilitates a traceable and objective tailoring of system-level reliability requirements across the diverse range of UAS operations. The significance of the research contained in this thesis is clearly demonstrated by its practical real-world outcomes. Industry regulatory development groups and the Civil Aviation Safety Authority have endorsed the proposed airworthiness certification matrix. The risk models have also been used to support research undertaken by the Australian Department of Defence. Ultimately, it is hoped that the outcomes from this research will play a significant part in the shaping of regulations for civil UAS, here in Australia and around the world.
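A heavily simplified picture of characterising ground risk as a function of the flight path is to accumulate, per path segment, an expected-casualty contribution from failure rate, exposure time, population density, lethal impact area and sheltering. The sketch below uses that textbook-style decomposition with placeholder parameter values; it is not the thesis's risk analysis tool.

# Simplified illustration (not the thesis's risk model): ground risk along a UAS
# flight path approximated per segment as
#   expected casualties = failure rate * exposure time * population density
#                         * lethal impact area * (1 - sheltering factor)
# All parameter values below are placeholders.
FAILURE_RATE_PER_HR = 1e-4        # assumed catastrophic failure rate
LETHAL_AREA_M2 = 5.0              # assumed lethal impact area of the aircraft
SHELTERING = 0.6                  # assumed fraction of people protected by structures

# (segment length km, ground speed km/h, population density people per km^2)
flight_path = [
    ( 2.0,  60.0,   50.0),        # semi-rural departure
    ( 5.0,  80.0, 1500.0),        # suburban transit
    ( 1.0,  40.0, 6000.0),        # dense urban area of interest
]

def segment_risk(length_km, speed_kmh, density_per_km2):
    exposure_hr = length_km / speed_kmh
    density_per_m2 = density_per_km2 / 1e6
    return (FAILURE_RATE_PER_HR * exposure_hr
            * density_per_m2 * LETHAL_AREA_M2 * (1.0 - SHELTERING))

total = sum(segment_risk(*seg) for seg in flight_path)
print(f"expected casualties per flight: {total:.2e}")
for i, seg in enumerate(flight_path):
    print(f"  segment {i}: {segment_risk(*seg):.2e}")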
Abstract:
Food modelling systems such as the Core Foods and the Australian Guide to Healthy Eating are frequently used as nutritional assessment tools for menus in ‘well’ groups (such as boarding schools, prisons and mental health facilities), with the draft Foundation and Total Diets (FATD) the latest revision. The aim of this paper is to apply the FATD to an assessment of food provision in a long-stay, ‘well’ group setting to determine its usefulness as a tool. A detailed menu review was conducted in a 1000-bed male prison, including verification of all recipes. Full diet histories were collected from 106 prisoners, which included foods consumed from the menu and self-funded snacks. Both the menu and diet histories were analysed according to core foods, with recipes used to assist in the quantification of mixed dishes. Comparison was made of average core foods with the Foundation Diet recommendations (FDR) for males. Results showed that the standard menu provided sufficient quantity for 8 of 13 FDRs; however, it was low in nuts, legumes and refined cereals, and marginally low in fruits and orange vegetables. The average prisoner diet achieved 9 of 13 FDRs, notably with margarines and oils less than half, and legumes one seventh, of the recommended amounts. Overall, although the menu and prisoner diets could easily be assessed using the FDRs, provision was not consistent with the recommendations. In long-stay settings, other Nutrient Reference Values not modelled in the FATD need consideration, in particular the Suggested Dietary Targets, and professional judgement is required in interpretation.
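The core of such an assessment is a serves-versus-recommendation comparison for each core food group. The sketch below illustrates that comparison with placeholder numbers; the recommended and provided serve counts are not the Foundation Diet recommendations or the prison menu data.

# Small illustration of the serves-versus-recommendation comparison described
# above. All serve counts are placeholders, not the FDR or the prison menu data.
fdr_serves = {            # assumed daily recommended serves per core food group
    "fruit": 2.0, "orange vegetables": 1.0, "legumes": 1.0,
    "refined cereals": 2.5, "nuts": 0.5, "lean meat": 2.5,
}
menu_serves = {           # assumed average serves provided by the menu
    "fruit": 1.8, "orange vegetables": 0.7, "legumes": 0.15,
    "refined cereals": 1.5, "nuts": 0.0, "lean meat": 3.0,
}

met = 0
for group, target in fdr_serves.items():
    provided = menu_serves.get(group, 0.0)
    status = "meets" if provided >= target else "below"
    met += provided >= target
    print(f"{group:20s} {provided:4.1f} / {target:3.1f}  {status}")
print(f"{met} of {len(fdr_serves)} recommendations met")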