930 results for habitat data
Abstract:
Trees, shrubs and other vegetation are of continued importance to the environment and our daily life. They provide shade around our roads and houses, offer a habitat for birds and wildlife, and absorb air pollutants. However, vegetation touching power lines is a risk to public safety and the environment, and one of the main causes of power supply problems. Vegetation management, which includes tree trimming and vegetation control, is a significant cost component of the maintenance of electrical infrastructure. For example, Ergon Energy, the Australian energy distributor with the largest geographic footprint, currently spends over $80 million a year inspecting and managing vegetation that encroaches on power line assets. Currently, most vegetation management programs for distribution systems are calendar-based ground patrols. However, calendar-based inspection by linesmen is labour-intensive, time-consuming and expensive. It also results in some zones being trimmed more frequently than needed and others not cut often enough. Moreover, it is seldom practicable to measure all the plants around power line corridors by field methods. Remote sensing data captured from airborne sensors have great potential in assisting vegetation management in power line corridors. This thesis presents a comprehensive study on using spiking neural networks in a specific image analysis application: power line corridor monitoring. Theoretically, the thesis focuses on a biologically inspired spiking cortical model: the pulse coupled neural network (PCNN). The original PCNN model was simplified in order to better analyse the pulse dynamics and control the performance. New and effective algorithms were developed based on the proposed spiking cortical model for object detection, image segmentation and invariant feature extraction. The developed algorithms were evaluated in a number of experiments using real image data collected from our flight trials.
The experimental results demonstrated the effectiveness and advantages of spiking neural networks in image processing tasks. Operationally, the knowledge gained from this research project offers a good reference to our industry partner (i.e. Ergon Energy) and other energy utilities that want to improve their vegetation management activities. The novel approaches described in this thesis show the potential of using cutting-edge sensor technologies and intelligent computing techniques to improve power line corridor monitoring. The lessons learnt from this project are also expected to increase the confidence of energy companies to move from a traditional vegetation management strategy to a more automated, accurate and cost-effective solution using aerial remote sensing techniques.
Abstract:
High levels of sitting have been linked with poor health outcomes. Previously, a pragmatic MTI accelerometer cut-point (100 counts·min⁻¹) has been used to estimate sitting; data on the accuracy of this cut-point are unavailable. PURPOSE: To ascertain whether the 100 counts·min⁻¹ cut-point accurately isolates sitting from standing activities. METHODS: Participants fitted with an MTI accelerometer were observed performing a range of sitting, standing, light and moderate activities. 1-min epoch MTI data were matched to observed activities, then re-categorised as either sitting or not using the 100 counts·min⁻¹ cut-point. Self-reported demographics and current physical activity were collected. Generalized estimating equation (GEE) analyses for repeated measures with a binary logistic model, corrected for age, gender and BMI, were conducted to ascertain the odds of the MTI data being misclassified. RESULTS: Data were from 26 healthy subjects (8 men; 50% aged <25 years; mean (SD) BMI 22.7 (3.8) kg/m²). The mode of both the MTI sitting and standing data was 0 counts·min⁻¹, with 46% of sitting activities and 21% of standing activities recording 0 counts·min⁻¹. The GEE was unable to accurately isolate sitting from standing activities using the 100 counts·min⁻¹ cut-point, since all sitting activities were incorrectly predicted as standing (p=0.05). To further explore the sensitivity of MTI data to delineate sitting from standing, the upper 95% confidence limit of the mean for the sitting activities (46 counts·min⁻¹) was used to re-categorise the data; this resulted in the GEE correctly classifying 49% of sitting and 69% of standing activities. Using the 100 counts·min⁻¹ cut-point, the data were re-categorised into a combined ‘sit/stand’ category and tested against other light activities: 88% of sit/stand and 87% of light activities were accurately predicted. Using Freedson’s moderate cut-point of 1952 counts·min⁻¹, the GEE accurately predicted 97% of light vs. 90% of moderate activities.
CONCLUSION: The distributions of MTI-recorded sitting and standing data overlap considerably; as such, the 100 counts·min⁻¹ cut-point did not accurately isolate sitting from other static standing activities. The 100 counts·min⁻¹ cut-point more accurately predicted sit/stand vs. other movement-oriented activities.
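As a hedged illustration of the cut-point rule this abstract evaluates, the sketch below classifies 1-min epochs against a 100 counts·min⁻¹ threshold. The epoch values are made up for illustration and are not the study’s data:

```python
# Hedged sketch of accelerometer cut-point classification (hypothetical data).
def classify_epoch(counts_per_min, cut_point=100):
    """Label a 1-min epoch as 'sitting' when its count falls below the cut-point."""
    return "sitting" if counts_per_min < cut_point else "not sitting"

# Illustrative epochs: (observed activity, accelerometer counts per minute).
epochs = [("sitting", 0), ("standing", 0), ("sitting", 46), ("walking", 1100)]

# Because many sitting AND standing epochs record 0 counts, any positive
# cut-point labels both as 'sitting' -- the overlap the abstract reports.
predictions = [(observed, classify_epoch(c)) for observed, c in epochs]
```

The zero-count epochs for both postures show why no count-based threshold alone can separate sitting from static standing.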
Abstract:
The aim of this study is to assess the potential use of Bluetooth data for traffic monitoring of arterial road networks. Bluetooth data provide a direct measurement of travel time between pairs of scanners, and intensive research has been reported on this topic. Bluetooth data also include “Duration” data, which represent the time spent by Bluetooth devices passing through the detection range of Bluetooth scanners. If the scanners are located at signalised intersections, this Duration can be related to intersection performance, and hence represents valuable information for traffic monitoring. However, the use of Duration has been ignored in previous analyses. In this study, the Duration data as well as travel time data are analysed to capture the traffic condition of a main arterial route in Brisbane. The data consist of one week of Bluetooth data provided by Brisbane City Council. In addition, micro-simulation analysis is conducted to further investigate the properties of Duration. The results reveal characteristics of Duration and address future research needs to utilise this valuable data source.
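A rough sketch of how per-scanner Duration and pairwise travel time could be derived from raw detection events; the device IDs, timestamps and record layout are assumptions for illustration, not Brisbane City Council’s data format:

```python
from collections import defaultdict

# Hypothetical detection events: (device_id, scanner_id, timestamp in seconds).
events = [
    ("aa:01", "scanner_A", 100), ("aa:01", "scanner_A", 160),
    ("aa:01", "scanner_B", 400), ("aa:01", "scanner_B", 415),
]

def durations(detections):
    """Duration = time between first and last sighting of a device at one scanner."""
    seen = defaultdict(list)
    for device, scanner, t in detections:
        seen[(device, scanner)].append(t)
    return {key: max(ts) - min(ts) for key, ts in seen.items()}

def travel_times(detections, origin, destination):
    """Travel time per device: first sighting at destination minus first at origin."""
    first = {}
    for device, scanner, t in detections:
        key = (device, scanner)
        first[key] = min(t, first.get(key, t))
    return {device: first[(device, destination)] - first[(device, origin)]
            for device, _, _ in detections
            if (device, origin) in first and (device, destination) in first}
```

At a signalised intersection, a long Duration at one scanner (here 60 s at scanner_A) could reflect queuing delay, which is the kind of information the abstract argues has been overlooked.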
Abstract:
Traffic simulation models tend to have their own data input and output formats. In an effort to standardise the input for traffic simulations, we introduce in this paper a set of data marts that aim to serve as a common interface between the necessary data, stored in dedicated databases, and the software packages that require the input in a certain format. The data marts are developed based on real-world objects (e.g. roads, traffic lights, controllers) rather than abstract models and hence contain all necessary information, which can be transformed by the importing software package to its needs. The paper contains a full description of the data marts for network coding, simulation results, and scenario management, which have been discussed with industry partners to ensure sustainability.
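A minimal sketch of what modelling the network around real-world objects might look like; the class names and fields here are assumptions for illustration, not the paper’s actual data mart schema:

```python
from dataclasses import dataclass, field

# Hypothetical real-world objects from a network-coding data mart.
@dataclass
class TrafficLight:
    light_id: str
    controller_id: str

@dataclass
class Road:
    road_id: str
    name: str
    speed_limit_kmh: int
    lights: list = field(default_factory=list)

# An importing simulation package can map these concrete objects onto its
# own abstract network representation.
road = Road(road_id="R1", name="Coronation Drive", speed_limit_kmh=60)
road.lights.append(TrafficLight(light_id="TL1", controller_id="C7"))
```

Storing concrete objects rather than one simulator’s abstractions is what lets several packages consume the same database.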
Abstract:
In response to the need to leverage private finance and the lack of competition in some parts of the Australian public sector major infrastructure market, especially in very large economic infrastructure procured using Public Private Partnerships, the Australian Federal government has demonstrated its desire to attract new sources of in-bound foreign direct investment (FDI) into the Australian construction market. This paper aims to report on progress towards an investigation into the determinants of multinational contractors’ willingness to bid for Australian public sector major infrastructure projects, which is designed to give an improved understanding of matters surrounding FDI into the Australian construction sector. This research deploys Dunning’s eclectic theory for the first time in terms of in-bound FDI by multinational contractors bidding, as head contractors, for Australian major infrastructure public sector projects. Elsewhere, the authors have developed Dunning’s principal hypothesis associated with his eclectic framework in order to suit the context of this research and to address a weakness in that hypothesis: it is based on a nominal approach to the factors in the eclectic framework and fails to speak to the relative explanatory power of these factors. In this paper, an approach to reviewing and analysing secondary data, as part of the first stage investigation in this research, is developed and some illustrations are given, vis-à-vis the selected sector (roads, bridges and tunnels) in Australia (as the host location) and using one of the selected home countries (Spain). In conclusion, some tentative thoughts are offered in anticipation of the completion of the first stage investigation, in terms of the extent to which this first stage, based on secondary data only, might suggest the relative importance of the factors in the eclectic framework.
It is noted that more robust conclusions are expected following the future planned stages of the research, and these stages, which include primary data, are briefly outlined. Finally, beyond the theoretical contributions expected from the overall approach taken to developing and testing Dunning’s framework, other expected contributions concerning research method and practical implications are mentioned.
Abstract:
Researchers are increasingly involved in data-intensive research projects that cut across geographic and disciplinary borders. Quality research now often involves virtual communities of researchers participating in large-scale web-based collaborations, opening their early-stage research to the research community in order to encourage broader participation and accelerate discoveries. The result of such large-scale collaborations has been the production of ever-increasing amounts of data. In short, we are in the midst of a data deluge. Accompanying these developments has been a growing recognition that if the benefits of enhanced access to research are to be realised, it will be necessary to develop the systems and services that enable data to be managed and secured. It has also become apparent that to achieve seamless access to data it is necessary not only to adopt appropriate technical standards, practices and architecture, but also to develop legal frameworks that facilitate access to and use of research data. This chapter provides an overview of the current research landscape in Australia as it relates to the collection, management and sharing of research data. The chapter then explains the Australian legal regimes relevant to data, including copyright, patent, privacy, confidentiality and contract law. Finally, this chapter proposes the infrastructure elements that are required for the proper management of legal interests, ownership rights and rights to access and use data collected or generated by research projects.
Abstract:
This report provides an evaluation of the current available evidence-base for identification and surveillance of product-related injuries in children in Queensland. While the focal population was children in Queensland, the identification of information needs and data sources for product safety surveillance has applicability nationally for all age groups. The report firstly summarises the data needs of product safety regulators regarding product-related injury in children, describing the current sources of information informing product safety policy and practice, and documenting the priority product surveillance areas affecting children which have been a focus over recent years in Queensland. Health data sources in Queensland which have the potential to inform product safety surveillance initiatives were evaluated in terms of their ability to address the information needs of product safety regulators. Patterns in product-related injuries in children were analysed using routinely available health data to identify areas for future intervention, and the patterns in product-related injuries in children identified in health data were compared to those identified by product safety regulators. Recommendations were made for information system improvements and improved access to and utilisation of health data for more proactive approaches to product safety surveillance in the future.
Abstract:
Assurance of learning is a predominant feature in both quality enhancement and assurance in higher education. Assurance of learning is a process that articulates explicit program outcomes and standards, and systematically gathers evidence to determine the extent to which performance matches expectations. Benefits accrue to the institution through the systematic assessment of whole-of-program goals. Data may be used for continuous improvement, program development, and to inform external accreditation and evaluation bodies. Recent developments, including the introduction of the Tertiary Education Quality and Standards Agency (TEQSA), will require universities to review the methods they use to assure learning outcomes. This project investigates two critical elements of assurance of learning: 1. the mapping of graduate attributes throughout a program; and 2. the collection of assurance of learning data. An audit was conducted with 25 of the 39 Business Schools in Australian universities to identify current methods of mapping graduate attributes and collecting assurance of learning data across degree programs, as well as to review the key challenges faced in these areas. Our findings indicate that external drivers like professional body accreditation (for example, the Association to Advance Collegiate Schools of Business (AACSB)) and TEQSA are important motivators for assuring learning, and those undertaking AACSB accreditation had more robust assurance of learning systems in place. It was reassuring to see that the majority of institutions (96%) had adopted an embedding approach to assuring learning rather than opting for independent standardised testing. The main challenges were the development of sustainable processes that were not considered a burden to academic staff, and obtaining academic buy-in to the benefits of assuring learning itself, rather than assurance of learning being seen as a tick-box exercise.
This cultural change is the real challenge in assurance of learning practice.
Abstract:
This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling. Some of the core components of data modelling are addressed. A selection of results from the first data modelling activity, implemented during the second year (2010; second grade) of a current longitudinal study, is reported. Data modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. Reported here are children's abilities to identify diverse and complex attributes, sort and classify data in different ways, and create and interpret models to represent their data.
Abstract:
Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
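The blocking idea the abstract describes can be sketched with a per-bit taint mask; this is a simplified illustration of the concept, not the toolkit’s actual model, and all names here are assumptions:

```python
# Minimal sketch of bit-level data flow through a masking expression.
class Tainted:
    """An 8-bit value with a per-bit taint mask (1 = depends on classified data)."""
    def __init__(self, value, taint):
        self.value = value & 0xFF
        self.taint = taint & 0xFF

def bitand_const(x, c):
    # In 'x & c', bits that the constant forces to zero cannot carry any
    # classified information, so the expression safely downgrades those bits.
    return Tainted(x.value & c, x.taint & c)

secret = Tainted(0b10110101, taint=0xFF)  # fully classified byte
masked = bitand_const(secret, 0x0F)       # only the low nibble survives
# masked.taint == 0x0F: the high four bits are provably blocked, so a
# coarse-grained data flow path through them would be a false positive.
```

A conventional data flow graph that treats the expression as atomic would propagate taint to the whole result; the bit-level model is what lets the false-positive path be discarded.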
Abstract:
Road asset managers are overwhelmed with a high volume of raw data which they need to process and utilise in supporting their decision making. This paper presents a method that processes road-crash data for a whole road network and exposes hidden value inherent in the data by deploying the clustering data mining method. The goal of the method is to partition the road network into a set of groups (classes) based on common data and to characterise the crash types of each class, producing a crash profile for each cluster. By comparing similar road classes with differing crash types and rates, insight can be gained into differences caused by the particular characteristics of the roads. These differences can be used as evidence in knowledge development and decision support.
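The profiling step could be sketched as below, computing the proportion of each crash type within each road class; the classes, crash types and counts are invented for illustration, not the paper’s data:

```python
from collections import Counter, defaultdict

# Hypothetical crash records: (road_class, crash_type).
crashes = [
    ("urban_arterial", "rear-end"), ("urban_arterial", "rear-end"),
    ("urban_arterial", "angle"), ("rural_highway", "run-off-road"),
    ("rural_highway", "run-off-road"), ("rural_highway", "head-on"),
]

def crash_profiles(records):
    """Proportion of each crash type within each road class."""
    by_class = defaultdict(Counter)
    for road_class, crash_type in records:
        by_class[road_class][crash_type] += 1
    return {cls: {t: n / sum(counts.values()) for t, n in counts.items()}
            for cls, counts in by_class.items()}

profiles = crash_profiles(crashes)
```

Comparing profiles across otherwise similar classes is what surfaces the differences attributable to road characteristics.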
Abstract:
This paper argues for a renewed focus on statistical reasoning in the elementary school years, with opportunities for children to engage in data modeling. Data modeling involves investigations of meaningful phenomena, deciding what is worthy of attention, and then progressing to organizing, structuring, visualizing, and representing data. Reported here are some findings from a two-part activity (Baxter Brown’s Picnic and Planning a Picnic) implemented at the end of the second year of a current three-year longitudinal study (grade levels 1-3). Planning a Picnic was also implemented in a grade 7 class to provide an opportunity for the different age groups to share their products. Addressed here are the grade 2 children’s predictions for missing data in Baxter Brown’s Picnic, the questions posed and representations created by both grade levels in Planning a Picnic, and the metarepresentational competence displayed in the grade levels’ sharing of their products for Planning a Picnic.