952 results for Wide-Area Measurements


Relevance: 80.00%

Abstract:

Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at centimetre-level accuracy in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (N-RTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states, and over 1000 single-base RTK systems support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated network and single-base RTK systems and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:

• Multiple GNSS constellations and multiple frequencies
• Large-scale, wide-area NRTK services with a network of networks
• Complex computation algorithms and processes
• A greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK)

These four challenges lead to two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to address these future NRTK challenges and requirements using Grid Computing, in particular for large data-processing burdens and complex computation algorithms.
A Grid Computing based NRTK framework is proposed in this research, consisting of three layers: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. A user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework was performed in a five-node Grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software was adopted to download real-time RTCM data from multiple reference stations over the Internet, followed by job scheduling and simplified RTK computation. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new Grid Computing based NRTK framework, while some aspects of system performance remain to be improved in future work.
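The layered request flow described above (Grid portal, service layer, execution layer) can be sketched as a toy round-robin job scheduler. The node names, request labels, and the round-robin policy below are illustrative assumptions, not the thesis's actual scheduling algorithm.

```python
from collections import deque

class GridScheduler:
    """Toy sketch: user requests enter at the client (portal) layer,
    pass through the service layer, and are assigned to execution-layer
    Grid nodes in round-robin order. Illustrative only."""

    def __init__(self, nodes):
        self.nodes = deque(nodes)

    def submit(self, request):
        # Service layer: pick the next execution node, then rotate
        # the queue so the following request goes to a different node.
        node = self.nodes[0]
        self.nodes.rotate(-1)
        return {"request": request, "node": node, "status": "scheduled"}

# A five-node Grid, mirroring the proof-of-concept environment.
scheduler = GridScheduler([f"node-{i}" for i in range(1, 6)])
jobs = [scheduler.submit(f"rtk-user-{u}") for u in range(7)]
for job in jobs:
    print(job["request"], "->", job["node"])
```

With seven requests and five nodes, the sixth and seventh requests wrap back around to the first two nodes, which is the kind of load spreading a reverse-RTK network centre needs when serving hundreds of simultaneous users.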

Relevance: 80.00%

Abstract:

Spatially offset Raman spectroscopy (SORS) is a powerful new technique for the non-invasive detection and identification of concealed substances and drugs. Here, we demonstrate the SORS technique in several scenarios relevant to customs screening, postal screening, drug detection and forensics applications. The examples include analysis of a multi-layered postal package to identify a concealed substance; identification of an antibiotic capsule inside its plastic blister pack; analysis of an envelope containing a powder; and identification of a drug dissolved in a clear solvent, contained in a non-transparent plastic bottle. As well as providing practical examples of SORS, the results highlight several considerations regarding the use of SORS in the field, including the advantages of different analysis geometries and the ability to tailor instrument parameters and optics to suit different types of packages and samples. We also discuss the features and benefits of SORS in relation to existing Raman techniques, including confocal microscopy, wide-area illumination and conventional backscattered Raman spectroscopy. The results will contribute to the recognition of SORS as a promising method for the rapid, chemically specific analysis and detection of drugs and pharmaceuticals.

Relevance: 80.00%

Abstract:

To support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, Global Navigation Satellite System (GNSS) based vehicle positioning systems have to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 metres. The positioning accuracy can be improved to sub-metre level or better with augmented GNSS techniques such as Real-Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying and other slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems operating in high-mobility environments. This involves evaluating the performance of both the RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluates the effectiveness of several operational strategies in reducing the load that correction data transmission places on data communication networks, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway speeds exceeding 80 km/h) experiments.
RTK solutions achieved an RMS precision of 0.09 to 0.2 metres in the static tests and 0.2 to 0.3 metres in the kinematic tests, while PPP achieved 0.5 to 1.5 metres in the static and 1 to 1.8 metres in the kinematic tests, using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level-accuracy vehicle positioning. The professional-grade (dual-frequency) and mass-market-grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market-grade receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared to the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network. The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput and packet retransmission rate. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results.
The experimental results from the static and kinematic field tests also showed that mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential parameter. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 metres. The results showed that the positioning accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
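The bandwidth saving from less frequent correction transmission is straightforward to quantify. The message size below is a placeholder, not a figure from the experiments; the sketch only shows how per-user load falls with the transmission interval.

```python
# Per-user downlink load for correction data sent at different intervals.
# MSG_BYTES is a hypothetical size for one correction message; real RTCM
# message sizes vary by format and number of satellites.
MSG_BYTES = 500

def load_bps(interval_s, msg_bytes=MSG_BYTES):
    """Average bits per second used when one correction message
    is broadcast every `interval_s` seconds."""
    return msg_bytes * 8 / interval_s

# The intervals tested in the experiments described above.
for interval in (1, 5, 10, 15, 20, 30, 60):
    print(f"{interval:2d} s interval -> {load_bps(interval):7.1f} bps per user")
```

The 20-second interval found acceptable above would, under this placeholder message size, cut per-user load to 5% of the 1-second baseline.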

Relevance: 80.00%

Abstract:

This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements across a wide range of environmental and social circumstances, so a rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. A tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary system to performance assessment applications, which are used as prioritised fitness functions. This produces design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach to design produces solutions through a design process that considers and balances the requirements of all aspects of the design.
Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that have not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building a prototype system to test and evaluate the system's performance against the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. It consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components. The design schema provides constraints on the generation of designs, enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of designers' human creativity in a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms.
The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded in the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem so that the design requirements of each level are dealt with separately, and then reassembling them in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach of exploring the range of design solutions through modification of the design schema, as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows multiple fitness functions, each relevant to a specific level, to be embedded in the genetic algorithm. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity.
By focusing on finding solutions to the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
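A minimal genetic-algorithm loop with a level-specific fitness function gives the flavour of the Synthesis Algorithms component. The 'Room'-level fitness below (match a target floor area while keeping the room near-square) and all numeric parameters are invented for illustration; they are not the HEAD system's actual encoding or fitness functions.

```python
import random

random.seed(42)  # deterministic for the example

TARGET_AREA = 100.0  # hypothetical required room area (m^2)

def fitness(genome):
    """'Room'-level fitness: penalise deviation from the target area
    and reward near-square proportions (lower is better)."""
    width, depth = genome
    area_err = abs(width * depth - TARGET_AREA)
    squareness = abs(width - depth)
    return area_err + 0.5 * squareness

def evolve(pop_size=30, generations=60):
    # Genome = (width, depth); random initial population.
    pop = [[random.uniform(2, 20), random.uniform(2, 20)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                      # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2]    # averaging crossover
            if random.random() < 0.3:                         # mutation
                child[random.randrange(2)] += random.uniform(-1, 1)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best room:", [round(v, 2) for v in best], "fitness:", round(fitness(best), 3))
```

In the full hierarchical scheme, the surviving 'Room' solutions would become the constrained inputs to the 'Layout' level, each level with its own fitness functions, as described above.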

Relevance: 80.00%

Abstract:

Australia is rich in renewable energy resources such as wind, solar and geothermal. The geographical diversity of these renewable resources, combined with developing climate change policies, poses a great challenge for long-term interconnection planning. The intermittency of wind and solar, potentially driving the development of new transmission lines, brings additional complexity to power system operations and planning. This paper provides an overview of generation and transmission planning studies in Australia to meet the 20% renewable energy target by 2020. An appraisal of the effectiveness of dispersed energy storage, non-schedulable peaking plants, wide-area controls and demand management techniques in aiding the penetration of renewables is also presented.

Relevance: 80.00%

Abstract:

Critical road infrastructure (such as tunnels and overpasses) is of major significance to society and constitutes major components of interdependent systems and networks. Failure in critical components of these wide-area infrastructure systems can often result in cascading disturbances with secondary and tertiary impacts, some of which may become initiating sources of failure in their own right, triggering further system failures across wider networks. Perrow 1) considered the impact of our increasing use of technology in high-risk fields, analysed the implications for everyday life, and argued that the designers of these types of infrastructure systems cannot predict every possible failure scenario or create perfect contingency plans for operators. Challenges exist for transport system operators in the conceptualisation and implementation of response and subsequent recovery planning for significant events. Disturbances can range from reduced traffic flow, causing congestion throughout the local road network(s) and possible loss of income to businesses and industry, to a major incident causing loss of life or the complete loss of an asset. Many organisations and institutions, despite increasing recognition of the effects of crisis events, are not adequately prepared to manage crises 2). It is argued that operators of land transport infrastructure are in a similar state of readiness, given recent instances of failures in road tunnels. These unexpected infrastructure failures, and their ultimately identified causes, suggest there is significant room for improvement. As a result, risk profiles for road transport systems are often complex, owing to human behaviours, the inter-mix of technical and organisational components, and the managerial coverage needed for both the socio-technical components and the physical infrastructure.
In this sense, the span of managerial oversight may require new approaches to asset management that combine the notions of risk and continuity management. This paper examines challenges in the planning of response and recovery practices of owner/operators of transport systems (above and below ground) in Australia, covering:

• Ageing or established infrastructure; and
• New-build infrastructure.

With reference to relevant international contexts, this paper suggests options for enhancing the planning and practice of crisis response in these transport networks and, as a result, supporting the resilience of Critical Infrastructure.

Relevance: 80.00%

Abstract:

The Remote Sensing Core Curriculum (RSCC) was initiated in 1993 to meet the demand for a college-level set of resources to enhance the quality of education across national and international campuses. The American Society of Photogrammetry and Remote Sensing adopted the RSCC in 1996 to sustain support of this educational initiative for its membership and collegiate community. A series of volumes, containing lectures, exercises, and data, is being created by expert contributors to address the different technical fields of remote sensing. The RSCC program is designed to operate on the Internet, taking full advantage of World Wide Web (WWW) technology for distance learning. The issues of curriculum development related to the educational setting, with demands on faculty, students, and facilities, are considered in order to understand the new paradigms of WWW-influenced computer-aided learning. The WWW is shown to be especially appropriate for facilitating remote sensing education, with its requirements for handling image data sets and multimedia learning tools. The RSCC is located at http://www.umbc.edu/rscc.

Relevance: 80.00%

Abstract:

This study presents a general approach to identifying dominant oscillation modes in bulk power systems using wide-area measurement systems. To identify the dominant modes automatically, without manual intervention, the spectral characteristics of power system oscillation modes are applied to distinguish the electromechanical oscillation modes calculated by a stochastic subspace method, and a proposed mode matching pursuit is adopted to discriminate the dominant modes from trivial modes. A stepwise-refinement scheme is then developed to remove outliers from the dominant modes, yielding highly accurate identification of the dominant modes. The method is applied to the dominant modes of the China Southern Power Grid, one of the largest parallel AC/DC grids in the world. Simulation data and field-measurement data are used to demonstrate the high accuracy and robustness of the dominant mode identification approach.
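The stochastic subspace method itself is beyond a short sketch, but the underlying task of extracting an oscillation mode's frequency and damping from measured ringdown data can be illustrated with a toy log-decrement estimator on a synthetic signal. The 0.8 Hz frequency and 5% damping ratio are invented values, and this simple estimator is not the paper's method.

```python
import math

# Synthetic ringdown: a single damped oscillation mode (hypothetical values).
F_HZ, ZETA = 0.8, 0.05
wd = 2 * math.pi * F_HZ                  # damped angular frequency
wn = wd / math.sqrt(1 - ZETA**2)         # natural angular frequency
dt = 0.001
x = [math.exp(-ZETA * wn * k * dt) * math.sin(wd * k * dt)
     for k in range(int(10 / dt))]

# Locate successive positive local maxima (one per oscillation period).
peaks = [k for k in range(1, len(x) - 1)
         if x[k - 1] < x[k] > x[k + 1] and x[k] > 0]

# Frequency from the peak spacing; damping from the logarithmic decrement.
period = (peaks[1] - peaks[0]) * dt
f_est = 1 / period
delta = math.log(x[peaks[0]] / x[peaks[1]])
zeta_est = delta / math.sqrt(4 * math.pi**2 + delta**2)
print(f"f = {f_est:.3f} Hz, zeta = {zeta_est:.4f}")
```

Subspace methods generalise this idea to noisy, multi-mode, multi-channel wide-area measurements, which is why the paper needs the matching-pursuit and outlier-removal stages to separate dominant modes from trivial ones.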

Relevance: 80.00%

Abstract:

Provides an accessible foundation to Bayesian analysis using real-world models. This book aims to present an introduction to Bayesian modelling and computation by considering real case studies drawn from diverse fields spanning ecology, health, genetics and finance. Each chapter comprises a description of the problem, the corresponding model, the computational method, results and inferences, as well as the issues that arise in the implementation of these approaches. Case Studies in Bayesian Statistical Modelling and Analysis:

• Illustrates how to do Bayesian analysis in a clear and concise manner using real-world problems.
• Each chapter focuses on a real-world problem and describes the way in which the problem may be analysed using Bayesian methods.
• Features approaches that can be used in a wide area of application, such as health, the environment, genetics, information science, medicine, biology, industry and remote sensing.

Case Studies in Bayesian Statistical Modelling and Analysis is aimed at statisticians, researchers and practitioners who have some expertise in statistical modelling and analysis and some understanding of the basics of Bayesian statistics, but little experience in its application. Graduate students of statistics and biostatistics will also find this book beneficial.

Relevance: 80.00%

Abstract:

Australian farmers have used precision agriculture technology for many years in the form of ground-based and satellite systems. However, these systems require the use of vehicles in order to analyse a wide area, which can be time-consuming and cost-ineffective, and satellite imagery may not be accurate enough for such analysis. Low-cost Unmanned Aerial Vehicles (UAVs) present an effective method of analysing large plots of agricultural fields. As a UAV can travel over long distances and fly over multiple plots, it allows more data to be captured by a sampling device such as a multispectral camera and analysed thereafter. This would allow farmers to analyse the health of their crops and thus focus their efforts on areas which may need attention. This project evaluates a multispectral camera for use on a UAV for agricultural applications.
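One common way multispectral imagery is turned into a crop-health indicator is the Normalised Difference Vegetation Index (NDVI), computed from the near-infrared and red bands. The abstract does not name a specific index, so NDVI here is an illustrative assumption, and the per-plot reflectance values are made up.

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; higher values
    generally indicate denser, healthier vegetation."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Hypothetical per-plot band reflectances from a UAV multispectral camera.
plots = {"plot-A": (0.60, 0.08),   # vigorous crop
         "plot-B": (0.45, 0.20),   # moderate
         "plot-C": (0.25, 0.22)}   # stressed / sparse
for name, (nir, red) in plots.items():
    print(f"{name}: NDVI = {ndvi(nir, red):.2f}")
```

Ranking plots by an index like this is what would let a farmer focus attention on the areas that need it most.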

Relevance: 80.00%

Abstract:

Vacuum pyrolysis of ammonium perchlorate (AP) and ammonium perchlorate/polystyrene (PS) propellant has been studied by differential thermal analysis (DTA) in order to observe the effect of transition metal oxides on sublimation. Sublimation and decomposition being competitive processes, their proportions depend on the pressure of the pyrolysis chamber. The enthalpies for complete decomposition and complete sublimation are available from the literature and by using these data together with DTA area measurements, the extents of sublimation and decomposition have been calculated for AP and the propellant system. The effect of the metal ions on the extent and rate of sublimation depends on their nature. For AP the extent of sublimation increases with a decrease in particle size. For the propellants the powder sublimes more readily than the bulk material, but in the presence of metal ions the bulk material sublimes more readily than the powder. To substantiate this finding, the effect of MnO2 on AP sublimation as a function of particle size was examined, and it was observed that the extent of sublimation decreases as the particle size decreases.
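The mass balance behind calculating "extents of sublimation" from DTA areas can be written down directly: if a fraction x of the sample sublimes and the remainder decomposes, the measured enthalpy is a weighted sum of the two limiting enthalpies. The sketch below uses placeholder numbers, not the literature enthalpy values used in the study, and ignores sign conventions and baseline corrections.

```python
def sublimation_fraction(dh_measured, dh_sub, dh_dec):
    """Solve dh_measured = x*dh_sub + (1 - x)*dh_dec for the
    sublimed fraction x (enthalpies per mole of sample)."""
    return (dh_measured - dh_dec) / (dh_sub - dh_dec)

# Placeholder enthalpies (kJ/mol) -- purely illustrative values.
DH_SUB, DH_DEC = 240.0, 60.0
dh_from_dta_area = 150.0   # hypothetical enthalpy derived from a DTA peak area
x = sublimation_fraction(dh_from_dta_area, DH_SUB, DH_DEC)
print(f"extent of sublimation: {x:.2f}")
```

A measured enthalpy midway between the two limits corresponds to half the sample subliming, which is the kind of inference the DTA area measurements support.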

Relevance: 80.00%

Abstract:

This paper demonstrates the application of an inverse filtering technique to power systems. To implement this method, the control objective should be based on a system variable that needs to be set to a specific value at each sampling time. A control input is calculated to generate the desired output of the plant, and the relationship between the two is used to design an auto-regressive model. The auto-regressive model is converted to a moving-average model to calculate the control input based on the future values of the desired output. Therefore, the future values required to construct the output are predicted in order to generate the appropriate control input for the next sampling time.
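The core idea of computing the control input directly from the desired output can be sketched for a first-order auto-regressive plant. The plant model and coefficients below are invented for illustration; they are not the paper's power-system model.

```python
# Toy plant: y[k] = A*y[k-1] + B*u[k].  Inverting it gives the control
# input that reproduces a desired output trajectory exactly (the
# moving-average / inverse-filter form):
#     u[k] = (y_des[k] - A*y[k-1]) / B
A, B = 0.9, 0.5   # hypothetical plant coefficients

def inverse_filter(y_desired):
    """Compute the input sequence that drives the plant output to y_desired."""
    u, y_prev = [], 0.0
    for y_des in y_desired:
        u.append((y_des - A * y_prev) / B)
        y_prev = y_des        # plant output equals desired by construction
    return u

def simulate(u):
    """Forward-simulate the plant for a given input sequence."""
    y, y_prev = [], 0.0
    for u_k in u:
        y_prev = A * y_prev + B * u_k
        y.append(y_prev)
    return y

target = [0.0, 0.2, 0.5, 1.0, 1.0, 1.0]   # desired output per sampling time
u = inverse_filter(target)
y = simulate(u)
print([round(v, 3) for v in y])
```

Here the desired future value is known at each step; in the paper's setting those future values must be predicted, which is the extra step the moving-average formulation supports.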

Relevance: 80.00%

Abstract:

Synthesis of fine-particle α-alumina and related oxide materials such as MgAl2O4, CaAl2O4, Y3Al5O12 (YAG), β′-alumina, LaAlO3 and ruby powder has been achieved at low temperatures (500°C) by the combustion of the corresponding metal nitrate-urea mixtures. The solid combustion products have been identified by their characteristic X-ray diffraction patterns. The fine-particle nature of the α-alumina and related oxide materials has been investigated using SEM, TEM, particle size analysis and surface area measurements.

Relevance: 80.00%

Abstract:

The thermal decomposition of barium titanyl oxalate tetrahydrate (BTO) has been investigated employing TGA, DTG and DTA techniques together with gas and chemical analysis. The decomposition proceeds through five steps and is not much affected by the surrounding gas atmosphere. The first step, the dehydration of the tetrahydrate, is followed by a low-temperature decomposition of the oxalate groups. In the temperature range 190–250°C, half a mole of carbon monoxide is evolved with the formation of a transient intermediate containing both oxalate and carbonate groups. The oxalate groups are completely destroyed in the range 250–450°C, resulting in the formation of a carbonate which retains free carbon dioxide in the matrix. The trapped carbon dioxide is released in the temperature range 460–600°C. The final decomposition of the carbonate takes place between 600 and 750°C and yields barium titanate. The i.r. spectra, surface area measurements and X-ray powder diffraction data support the entrapment of carbon dioxide in the matrix.

Relevance: 80.00%

Abstract:

The discovery of graphene has aroused great interest in the properties and phenomena exhibited by two-dimensional inorganic materials, especially when they comprise only one, two or a few layers. Graphene-like MoS2 and WS2 have been prepared by chemical methods, and the materials have been characterized by electron microscopy, atomic force microscopy (AFM) and other methods. Boron nitride analogues of graphene have been obtained by a simple chemical procedure starting with boric acid and urea, and have been characterized by various techniques that include surface area measurements. A new layered material with the composition BCN, possessing a few layers and a large surface area, was discovered recently and exhibits a large uptake of CO2.