514 results for uses
Abstract:
Approximately 20 years have passed since the NTSB issued its original recommendation to expedite development, certification and production of low-cost proximity warning and conflict detection systems for general aviation [1]. While some systems are in place (TCAS [2]), see-and-avoid remains the primary means of separation between light aircraft sharing the national airspace. The requirement for a collision avoidance or sense-and-avoid capability onboard unmanned aircraft has been identified by leading government, industry and regulatory bodies as one of the most significant challenges facing the routine operation of unmanned aerial systems (UAS) in the national airspace system (NAS) [3, 4]. In this thesis, we propose and develop a novel image-based collision avoidance system to detect and avoid an upcoming conflict scenario (with an intruder) without first estimating or filtering range. The proposed collision avoidance system (CAS) uses relative bearing and subtended angular area, estimated from an image, to form a test statistic, ASC. This test statistic is used in a thresholding technique to decide whether a conflict scenario is imminent. If deemed necessary, the system commands the aircraft to perform a manoeuvre based on, and constrained by, the CAS sensor field of view. Using a simulation environment in which the UAS is mathematically modelled and a flight controller is developed, we show that Monte Carlo simulations can be used to estimate the risk ratio for a Mid-Air Collision (MAC), RR_MAC, and for a Near Mid-Air Collision (NMAC), RR_NMAC. We also show the performance gain this system has over a simplified (bearings-only) version, demonstrated in the form of a standard operating characteristic curve. Finally, it is shown that the proposed CAS performs at a level comparable to current manned aviation's equivalent level of safety (ELOS) expectations for Class E airspace. In some cases, the CAS may be oversensitive, manoeuvring the owncraft when not necessary, but this constitutes a more conservative, and therefore safer, flying procedure in most instances.
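A minimal Python sketch of the decision logic described above (thresholding an image-derived statistic and commanding a field-of-view-constrained manoeuvre) follows. The statistic, threshold and turn magnitude are illustrative placeholders, not the thesis's actual ASC formulation.

    import numpy as np

    def conflict_statistic(bearing_rad, angular_area_sr):
        # Placeholder for the ASC test statistic; here simply the
        # subtended angular area, which grows as range closes.
        return angular_area_sr

    def cas_step(bearing_rad, angular_area_sr, threshold, fov_rad):
        # Thresholding step: no manoeuvre unless the statistic
        # exceeds the tuned threshold.
        if conflict_statistic(bearing_rad, angular_area_sr) <= threshold:
            return 0.0
        # Turn away from the intruder (5 degrees is illustrative); a
        # head-on intruder at exactly zero bearing would need a
        # tie-break in practice.
        turn = -np.sign(bearing_rad) * np.radians(5.0)
        # Clip the turn so the intruder stays inside the sensor FOV,
        # reflecting the abstract's field-of-view constraint.
        max_turn = max(fov_rad / 2.0 - abs(bearing_rad), 0.0)
        return float(np.clip(turn, -max_turn, max_turn))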
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components in addition to the desired current. Also, noise and harmonic distortion can impact the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the renowned and advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3.
The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices. The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through extensive mathematical work with regard to the matrix arrangement utilised. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not well tracked by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking these changes. Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Also, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme.
Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in the voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
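As a rough illustration of the family of estimators described above, the Python sketch below runs a linear Kalman filter on samples pre-smoothed by a plain-summation FIR stage, estimating the amplitude and phase of a sinusoid of known frequency. This is a minimal textbook sketch; the thesis's modified filter, its frequency estimation unit and its analytically derived initial settings are not reproduced.

    import numpy as np

    def moving_sum(x, n):
        # Plain-summation FIR stage (a moving average); n would be
        # tuned to the sampling rate and the harmonics to suppress.
        return np.convolve(x, np.ones(n) / n, mode="valid")

    def kalman_amp_phase(samples, freq_hz, fs_hz, r=1e-2, q=1e-6):
        # State x = [A*cos(phi), A*sin(phi)] for a sinusoid of known
        # frequency; measurement y_k = x[0]*cos(w*k) - x[1]*sin(w*k).
        w = 2 * np.pi * freq_hz / fs_hz
        x = np.zeros(2)          # state estimate
        P = np.eye(2)            # initial covariance (a guess here)
        for k, y in enumerate(samples):
            H = np.array([np.cos(w * k), -np.sin(w * k)])
            P = P + q * np.eye(2)        # random-walk process noise
            S = H @ P @ H + r            # innovation variance
            K = P @ H / S                # Kalman gain
            x = x + K * (y - H @ x)      # measurement update
            P = P - np.outer(K, H @ P)
        return np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])

Note that the FIR stage itself attenuates and delays the sinusoid, so in practice the raw amplitude and phase estimates would need a gain and phase correction for the chosen window length.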
Abstract:
Player experience of spatiality in first-person, single-player games is informed by the maps and navigational aids provided by the game. This project uses textual analysis to examine the way these maps and navigational aids inform the experience of spatiality in Fallout 3, BioShock and BioShock 2. Spatiality is understood as trialectic, incorporating perceived, conceived and lived space, drawing on the work of Henri Lefebvre and Edward Soja. The most prominent elements of the games' maps and navigational aids are analysed in terms of how they inform players' experience of the games' spaces. In particular, this project examines the in-game maps these games incorporate, the waypoint navigation and fast-travel systems in Fallout 3, and the guide arrow and environmental cues in the BioShock games.
Abstract:
This paper describes modelling, estimation and control of the horizontal translational motion of an open-source and cost-effective quadcopter, the MikroKopter. We determine the dynamics of its roll and pitch attitude controller, system latencies, and the units associated with the values exchanged with the vehicle over its serial port. Using this, we create a horizontal-plane velocity estimator that uses data from the built-in inertial sensors and an onboard laser scanner, and implement translational control using a nested control loop architecture. We present experimental results for the model and estimator, as well as closed-loop positioning.
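The nested control loop architecture mentioned above can be sketched in a few lines of Python: an outer position loop produces a velocity demand, and an inner velocity loop produces the roll/pitch angle that would be sent to the MikroKopter's built-in attitude controller. The gains and limit below are illustrative assumptions, not the paper's identified values.

    def nested_position_control(pos_err, vel_est, kp_pos=1.0,
                                kp_vel=0.4, max_tilt=0.3):
        # Outer loop: position error -> desired velocity.
        vel_demand = kp_pos * pos_err
        # Inner loop: velocity error -> commanded tilt angle (rad).
        tilt_demand = kp_vel * (vel_demand - vel_est)
        # Respect the vehicle's tilt limit.
        return max(-max_tilt, min(max_tilt, tilt_demand))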
Abstract:
In public places, crowd size may be an indicator of congestion, delay, instability, or of abnormal events such as a fight, riot or emergency. Crowd-related information can also provide important business intelligence, such as the distribution of people throughout spaces, throughput rates, and local densities. A major drawback of many crowd counting approaches is their reliance on large numbers of holistic features, their training data requirements of hundreds or thousands of frames per camera, and the need to train each camera separately. This makes deployment in large multi-camera environments such as shopping centres very costly and difficult. In this chapter, we present a novel scene-invariant crowd counting algorithm that uses local features to monitor crowd size. The use of local features allows the proposed algorithm to calculate local occupancy statistics, scale to conditions unseen in the training data, and be trained on significantly less data. Scene invariance is achieved through camera calibration, allowing the system to be trained on one or more viewpoints and then deployed on any number of new cameras for testing without further training. A pre-trained system could then be used as a turn-key solution for crowd counting across a wide range of environments, eliminating many of the costly barriers to deployment that currently exist.
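A minimal sketch of the scene-invariant idea follows: local (blob-level) features are normalised by the calibrated ground-plane scale at their image location before a learned regressor maps them to a count, so a model trained on one viewpoint can be applied to another. The feature layout and the linear regressor are placeholder assumptions, not the chapter's exact formulation.

    import numpy as np

    def crowd_count(local_features, pixels_per_m2, weights):
        # local_features: one feature vector per detected blob;
        # pixels_per_m2: calibrated ground-plane scale at each blob.
        total = 0.0
        for feats, scale in zip(local_features, pixels_per_m2):
            normalised = np.asarray(feats) / scale  # remove camera scale
            total += float(weights @ normalised)    # local regression
        return max(total, 0.0)                      # counts are non-negative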
Abstract:
This paper presents a Six Sigma case study analysis involving three service organizations in Singapore: a local hospital, a construction and related engineering service, and a consultancy service. These organizations embarked on their Six Sigma journey around 2003-2004, though the hospital was slightly ahead of the other two in beginning Six Sigma. These organizations have since achieved significant service improvements through implementation of Six Sigma in their different divisions. Through a series of structured interviews with Six Sigma project champions, team leaders and members; project reports; public archives; and observations, this study explores the Six Sigma journey of these organizations. The results portray a list of success factors which led to the Six Sigma initiatives, the process of Six Sigma implementation through proper identification of critical-to-quality characteristics, tools and techniques, and the performance indicators which display the improvements due to Six Sigma.
Abstract:
The history of public discourse (and in many cases, academic publishing) on pornography is, notoriously, largely polemical and polarised. There is perhaps no other media form that has been so relentlessly the centre of what boils down to little more than arguments for or against; most famously, on the basis of the oppression, dominance or liberation of sexual subjectivities. These polarised debates leave much conceptual space for researchers to explore: discussions of pornography often lack specificity (when speaking of porn, what exactly do we mean? Which genre? Which markets?); assumptions (e.g. about exactly how the sexualised white male body functions culturally, or what the uses of porn actually might be) can be buried; and empirical opportunities (how porn as a media industry connects to innovation and the rest of the mediasphere) are missed. In this issue, we have tried to create and populate such a space, not only for the rethinking of some of our core assumptions about pornography, but also for the treatment of pornography as a bona fide, even while contested and problematic, segment of the media and cultural industries, linked economically and symbolically to other media forms.
Abstract:
Resilient organised crime groups survive and prosper despite law enforcement activity, criminal competition and market forces. Corrupt police networks, like any other crime network, must contain resiliency characteristics if they are to continue operation and avoid being closed down through detection and arrest of their members. This paper examines the resilience of a large corrupt police network, namely The Joke, which operated in the Australian state of Queensland for a number of decades. The paper uses social network analysis tools to determine the resilient characteristics of the network. It also assumes that these characteristics will differ from those of mainstream organised crime groups, because the police network operates within an established policing agency rather than as an independent entity hiding within the broader community.
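For readers unfamiliar with the toolset, the Python sketch below computes the kinds of structural indicators that social network analysis commonly uses when assessing a covert network's resilience (density, centrality, and cut-points whose removal would fragment the network). It uses the networkx library; the paper's exact measures are not specified here.

    import networkx as nx

    def resilience_profile(edges):
        # Build an undirected graph from (member, member) ties.
        g = nx.Graph(edges)
        return {
            "density": nx.density(g),
            "degree_centrality": nx.degree_centrality(g),
            "betweenness": nx.betweenness_centrality(g),
            # Members whose removal would disconnect the network.
            "articulation_points": list(nx.articulation_points(g)),
        }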
Abstract:
Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and simultaneously delivering Value for Money (VfM). As background to this challenge, a brief review is given of current practice in the selection of major public sector infrastructure in Australia, along with a review of the related literature concerning the Multi-Attribute Utility Approach (MAUA) and the effect of MAUA on the role of risk management in procurement selection. To contribute towards addressing the key weaknesses of MAUA, a new first-order procurement decision-making model is introduced. A brief summary is also given of the research method and hypothesis used to test and develop the new procurement model, which uses competition as the dependent variable and as a proxy for VfM. The hypothesis is given as follows: when the actual procurement mode matches the theoretical/predicted procurement mode (informed by the new procurement model), then actual competition is expected to match optimum competition (based on actual prevailing capacity vis-à-vis the theoretical/predicted procurement mode), subject to efficient tendering. The aim of this paper is to report on progress towards testing this hypothesis in terms of an analysis of two of the four data components in the hypothesis: actual procurement and actual competition, across 87 road and health major public sector projects in Australia. In conclusion, it is noted that the Global Financial Crisis (GFC) has seen a significant increase in competition in public sector major road and health infrastructure. If any imperfections in procurement and/or tendering are discernible, this would create the opportunity, through the deployment of the economic principles embedded in the new procurement model and/or adjustments in tendering, to maintain some of this higher post-GFC competition throughout the next business cycle/upturn in demand, including private sector demand. Finally, the paper previews the next steps in the research with regard to the collection and analysis of data concerning theoretical/predicted procurement and optimum competition.
Abstract:
On the road, near-collision events (also known as close calls or near-miss incidents) largely outnumber actual crashes, yet most of them can never be recorded by current traffic data collection technologies or crash analysis tools. The analysis of near-collision data is an important step in the process of reducing the crash rate. Several studies have investigated near collisions; to our knowledge, this is the first study that uses the functionalities provided by cooperative vehicles to collect near-miss information. We use the VISSIM traffic simulator and a custom C++ engine to simulate cooperative vehicles and their ability to detect near-collision events. Our results show that, within a simple simulated environment, adequate information on near-collision events can be collected using the functionalities of cooperative perception systems. The relationship between the ratio of detected events and the ratio of equipped vehicles was shown to closely follow a squared law, and the largest source of non-detection was packet loss rather than packet delays or GPS imprecision.
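The squared law has a simple interpretation: a near-collision event is only observable cooperatively when both vehicles involved are equipped, so with an equipped-vehicle ratio p the detected fraction tends to p squared. The short Python Monte Carlo sketch below illustrates this idealised relationship; it ignores the packet loss, packet delays and GPS imprecision modelled in the paper.

    import random

    def detection_ratio(p_equipped, n_events=100_000, seed=1):
        # An event is detected only if both involved vehicles are
        # equipped, each with independent probability p_equipped.
        rng = random.Random(seed)
        detected = sum(
            rng.random() < p_equipped and rng.random() < p_equipped
            for _ in range(n_events)
        )
        return detected / n_events

    # e.g. detection_ratio(0.5) comes out close to 0.25 = 0.5**2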
Abstract:
Partitioning of heavy metals between the particulate and dissolved fractions of stormwater primarily depends on the adsorption characteristics of solid particles. Moreover, the bioavailability of heavy metals is also influenced by the adsorption behaviour of solids. However, due to the lack of fundamental knowledge in relation to the heavy metal adsorption processes of road-deposited solids, the effectiveness of stormwater management strategies can be limited. The research study focused on the investigation of the physical and chemical parameters of solids on urban road surfaces and, more specifically, on heavy metal adsorption to solids. Due to the complex nature of heavy metal interaction with solids, a substantial database was generated through a series of field investigations and laboratory experiments. The study sites for build-up pollutant sample collection were selected from four urbanised suburbs located in a major river catchment. Sixteen road sites were selected from these suburbs, representing typical industrial, commercial and residential land uses. Build-up pollutants were collected using a wet and dry vacuum collection technique specially designed to improve fine particle collection. Roadside soil samples were also collected from each suburb for comparison with the road surface solids. The collected build-up solids samples were separated into four particle size ranges and tested for a range of physical and chemical parameters. The solids build-up on road surfaces contained a high fraction (70%) of particles smaller than 150 μm, which are favourable for heavy metal adsorption. These solids particles predominantly consist of soil-derived minerals, including quartz, albite, microcline, muscovite and chlorite. Additionally, a high percentage of amorphous content was identified in road-deposited solids. In comparing the mineralogical data of surrounding soil and road-deposited solids, it was found that about 30% of the solids consisted of particles generated by traffic-related activities on road surfaces. Significant differences in mineralogical composition were noted between different particle sizes of build-up solids. Fine solids particles (<150 μm) consisted of a clayey matrix and high amorphous content (in the region of 40%), while coarse particles (>150 μm) consisted of a sandy matrix at all study sites, with about 60% quartz content. Due to these differences in mineralogical components, particles larger and smaller than 150 μm had significant differences in their specific surface area (SSA) and effective cation exchange capacity (ECEC). These parameters, in turn, exert a significant influence on heavy metal adsorption. Consequently, the heavy metal content of >150 μm particles was lower than that of fine particles. The <75 μm particle size range had the highest heavy metal content, corresponding with its high clay-forming minerals, high organic matter and low quartz content, which increased the SSA, the ECEC and the presence of Fe, Al and Mn oxides. The clay-forming minerals, high organic matter and Fe, Al and Mn oxides create distinct groups of charge sites on solids surfaces and exhibit different adsorption mechanisms and bond strengths between heavy metal elements and charge sites. Therefore, the predominance of these factors in different particle sizes leads to different heavy metal adsorption characteristics.
Heavy metals show a preference for association with clay-forming minerals in fine solids particles, whilst in coarse particles heavy metals preferentially associate with organic matter. Although heavy metal adsorption to amorphous material is very low, the heavy metals embedded in traffic-related materials have a potential impact on stormwater quality. Adsorption of heavy metals is not confined to an individual type of charge site in solids; rather, specific heavy metal elements show a preference for adsorption to several different types of charge sites. This is attributed to the dearth of preferred binding sites and the inability to reach the preferred binding sites due to competition between different heavy metal species. This confirms that heavy metal adsorption is significantly influenced by the physical and chemical parameters of solids that lead to a heterogeneity of surface charge sites. The research study highlighted the importance of removing solids particles from stormwater runoff before they enter receiving waters, in order to reduce the potential risk posed by the bioavailability of heavy metals. The bioavailability of heavy metals not only results from the easily mobile fraction bound to the solids particles, but can also occur as a result of the dissolution of other forms of bonds through chemical changes in stormwater or microbial activity. Due to the diversity in the composition of the different particle sizes of solids, and the characteristics and amount of charge sites on the particle surfaces, investigations using bulk solids are not adequate to gain an understanding of the heavy metal adsorption processes of solids particles. Therefore, the investigation of different particle size ranges is recommended for enhancing stormwater quality management practices.
Abstract:
Organisations within the not-for-profit sector provide services to individuals and groups that government and for-profit organisations cannot or will not consider. The not-for-profit sector has come to be a vibrant and rich agglomeration of services and programs that operate under a myriad of philosophical stances, service orientations, client groupings and operational capacities. In Australia these organisations and services provide social support and service assistance to many people in the community, often targeting their assistance to the most difficult of clients. Initially, in undertaking this role, the not-for-profit sector received limited sponsorship from government. Over time governments assumed greater responsibility in the form of service grants to particular groups: the 'worthy poor'. More recently, they have entered into contractual service agreements with the not-for-profit sector, which specify the nature of the outcomes to be achieved and, to a degree, the way in which the services will be provided. A consequence of this growing shift to a more marketised model of service contracting, often offered up under the label of enhanced collaborative practice, has been increased competitiveness between agencies that had previously worked well together (Keast and Brown, 2006). Another trend emerging from the market approach is the entrance of for-profit providers. These larger organisations have higher levels of organisational capacity, with considerable organisational slack allowing them to adopt new service roles. Shaped almost as shadow governments, they appear to be a strong preference for governments looking for greater accountability of outcomes and an easier way to control the interaction with the conventional not-for-profit sector. The question is: will governments' apparent preference for larger organisational arrangements lead to the demise of the vibrancy of the not-for-profit sector and impact on service provision to those people who fall outside the remit of the new service providers? To address this issue, this paper uses information gleaned from a state-wide survey of not-for-profit organisations in Queensland, Australia, which covered organisational size, operational scope, funding arrangements and governance/management approaches. Supplementing this information is qualitative data derived from 17 focus groups and 120 interviews conducted over ten years of study of this sector. The findings contribute to a greater understanding of the practice and theory of the future provision of social services.
Abstract:
This paper describes a vision-based airborne collision avoidance system developed by the Australian Research Centre for Aerospace Automation (ARCAA) under its Dynamic Sense-and-Act (DSA) program. We outline the system architecture and the flight testing undertaken to validate the system performance under realistic collision course scenarios. The proposed system could be implemented in either manned or unmanned aircraft, and represents a step forward in the development of a sense-and-avoid capability equivalent to human see-and-avoid.
Abstract:
We address the problem of constructing randomized online algorithms for the Metrical Task Systems (MTS) problem on a metric space against an oblivious adversary. Restricting our attention to the class of work-based algorithms, we provide a framework for designing algorithms that uses the technique of regularization. For the case when the metric is uniform, we exhibit two algorithms that arise from this framework, and we prove a bound on the competitive ratio of each. We show that the second of these algorithms is ln n + O(log log n) competitive, which is the current state of the art for the uniform MTS problem.
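For reference, the notion of competitiveness used above is the standard one (a textbook definition, not quoted from the paper):

    % A randomized online algorithm ALG is c-competitive against an
    % oblivious adversary if, for every task sequence \sigma fixed
    % in advance,
    \[
      \mathbb{E}\bigl[\mathrm{ALG}(\sigma)\bigr]
        \le c \cdot \mathrm{OPT}(\sigma) + \alpha
    \]
    % for some constant \alpha independent of \sigma.  The second
    % algorithm above achieves c = \ln n + O(\log\log n) on the
    % n-point uniform metric.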
Abstract:
The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to utilise these volumes of health data effectively and efficiently. Also, many healthcare organisations still have stand-alone systems, not integrated for the management of information and decision-making. This shows there is a need for an effective system to capture, collate and distribute health data. Implementing the data warehouse concept in healthcare is therefore potentially one of the solutions for integrating health data. Data warehousing has been used to support business intelligence and decision-making in many other sectors such as the engineering, defence and retail sectors. The research problem addressed is: how can data warehousing assist the decision-making process in healthcare? To address this problem, the researcher narrowed the investigation to focus on a cardiac surgery unit, using the cardiac surgery unit at the Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. However, much of the time, the interaction between the cardiac surgery unit information system and other units is minimal; there is only limited, basic two-way interaction with the other clinical and administrative databases at TPCH which support decision-making processes. The aims of this research are to investigate what decision-making issues are faced by healthcare professionals with the current information systems, and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models. As part of the research, the researcher proposes and develops a suitable data warehouse prototype based on the cardiac surgery unit's needs, integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database (electronic discharge summary, e-DS). The goal is to improve the current decision-making processes. The main objectives of this research are to improve access to integrated clinical and financial data, providing potentially better information for decision-making. Based on both the questionnaire responses and the consulted literature, the results indicate a centralised data warehouse model for the cardiac surgery unit at this stage. A centralised data warehouse model addresses current needs and can also be upgraded to an enterprise-wide warehouse model or a federated data warehouse model, as discussed in many of the consulted publications. The data warehouse prototype was developed using SAS Enterprise Data Integration Studio 4.2, and the data was analysed using SAS Enterprise Edition 4.3. In the final stage, the data warehouse prototype was evaluated by collecting feedback from the end users. This was achieved by using output created from the data warehouse prototype as examples of the data desired and possible in a data warehouse environment. According to the feedback collected from the end users, implementation of a data warehouse was seen to be a useful tool to inform management options, provide a more complete representation of factors related to a decision scenario, and potentially reduce information product development time. However, many constraints exist in this research.
These include technical issues such as data incompatibilities and the integration of the cardiac surgery database and e-DS database servers; Queensland Health information restrictions (Queensland Health information-related policies, patient data confidentiality and ethics requirements); the limited availability of support from IT technical staff; and time restrictions. These factors influenced the process of warehouse model development, necessitating an incremental approach, and highlight the presence of many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents and a single-site case report study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite this constraint, the research demonstrates that by implementing a data warehouse at the service level, decision-making is supported and data quality issues related to access and availability can be reduced, providing many benefits. Output reports produced from the data warehouse prototype demonstrated usefulness for the improvement of decision-making in the management of clinical services, and for quality and safety monitoring for better clinical care. In the future, however, the centralised model selected can be upgraded to an enterprise-wide architecture by integrating additional hospital unit databases.