896 results for Benchmark Criteria
Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of 4-Dimensional (4D) waypoints (three spatial and one time dimension) or equivalently with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*) based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution). Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which results in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP.
This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres. The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
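The thesis above centres on a multi-objective extension of A* search over a 4D lattice. As a point of reference only, the sketch below shows plain multi-objective A* on a 2D grid, aggregating two hypothetical cost components (safety and efficiency) with a weighted sum and guiding the search with an admissible Manhattan heuristic; it does not implement MSA*'s variable successor operator or multi-resolution lattice.

```python
# Illustrative multi-objective A* on a 4-connected grid (a sketch only, not
# the thesis's MSA*): each cell carries several cost components that are
# aggregated with a weighted sum, and Manhattan distance is used as an
# admissible heuristic because every step costs at least 1.
import heapq
import itertools

def multi_objective_astar(grid_cost, start, goal, weights=(0.5, 0.5)):
    """grid_cost[(x, y)] -> (safety_cost, efficiency_cost); both hypothetical, >= 0."""
    def h(n):
        return abs(goal[0] - n[0]) + abs(goal[1] - n[1])

    def step_cost(n):  # weighted-sum aggregation of the decision objectives
        return 1.0 + sum(w * c for w, c in zip(weights, grid_cost[n]))

    counter = itertools.count()          # tie-breaker for the priority queue
    open_set = [(h(start), next(counter), 0.0, start, None)]
    parents, g_best = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in parents:              # already expanded at lower cost
            continue
        parents[node] = parent
        if node == goal:                 # reconstruct the least-cost path
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
        x, y = node
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nbr not in grid_cost:
                continue
            g_new = g + step_cost(nbr)
            if g_new < g_best.get(nbr, float("inf")):
                g_best[nbr] = g_new
                heapq.heappush(open_set, (g_new + h(nbr), next(counter), g_new, nbr, node))
    return None                          # goal unreachable
```

Because every step costs at least 1, the Manhattan heuristic never overestimates, so the returned path is least-cost under the chosen objective weights.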
Abstract:
Design teams are confronted with the quandary of choosing apposite building control systems to suit the needs of particular intelligent building projects, owing to the availability of innumerable ‘intelligent’ building products and a dearth of inclusive evaluation tools. This paper develops a model for facilitating the selection evaluation of intelligent HVAC control systems for commercial intelligent buildings. To achieve this objective, systematic research activities were conducted to first develop, test and refine the general conceptual model using consecutive surveys; then to convert the developed conceptual framework into a practical model; and finally to evaluate the effectiveness of the model by means of expert validation. The surveys indicate that ‘total energy use’ is perceived as the top selection criterion, followed by ‘system reliability and stability’, ‘operating and maintenance costs’, and ‘control of indoor humidity and temperature’. This research not only presents a systematic and structured approach to evaluate candidate intelligent HVAC control systems against the critical selection criteria (CSC), but also suggests a benchmark for the selection of one control system candidate against another.
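The abstract does not spell out the practical model's scoring mechanics, but a common way to operationalise selection against critical selection criteria is a weighted-score comparison. The sketch below is a minimal, hypothetical illustration; the weights, 1-5 scores and candidate names are invented, and only the criterion names echo the survey results.

```python
# Minimal weighted-score ranking of candidate HVAC control systems against
# selection criteria. Weights and scores are hypothetical, not the paper's model.
criteria_weights = {
    "total energy use": 0.35,
    "system reliability and stability": 0.30,
    "operating and maintenance costs": 0.20,
    "control of indoor humidity and temperature": 0.15,
}

# Candidate scores on a 1-5 scale (hypothetical data).
candidates = {
    "System A": {"total energy use": 4, "system reliability and stability": 3,
                 "operating and maintenance costs": 4,
                 "control of indoor humidity and temperature": 5},
    "System B": {"total energy use": 5, "system reliability and stability": 4,
                 "operating and maintenance costs": 3,
                 "control of indoor humidity and temperature": 3},
}

def weighted_score(scores):
    """Aggregate a candidate's criterion scores with the criteria weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(candidates, key=lambda name: weighted_score(candidates[name]), reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(candidates[name]):.2f}")
```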
Abstract:
Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
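The four tests themselves are not detailed in the abstract. As an illustration of the general idea behind a ranking or false-identification consistency check, the hypothetical sketch below measures how stable a method's top-k hot-spot list is across two analysis periods; it is not one of the paper's tests.

```python
# A sketch of one possible consistency-style check for an HSID method: the
# fraction of top-k sites flagged in one period that the same method flags
# again in a later period. Illustrative only, not the paper's four tests.
def top_k_sites(scores, k):
    """scores: {site_id: ranking score}; returns the k highest-scoring sites."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def ranking_consistency(scores_period1, scores_period2, k):
    sites1 = top_k_sites(scores_period1, k)
    sites2 = top_k_sites(scores_period2, k)
    return len(sites1 & sites2) / k   # 1.0 = perfectly consistent hot-spot list

# Hypothetical accident-frequency scores for five road sections in two periods.
period1 = {"S1": 12, "S2": 7, "S3": 15, "S4": 3, "S5": 9}
period2 = {"S1": 10, "S2": 11, "S3": 14, "S4": 2, "S5": 6}
print(ranking_consistency(period1, period2, k=2))   # -> 0.5 (only S3 is flagged in both)
```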
Abstract:
The multi-criteria decision making methods, Preference Ranking Organisation METHod for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA), and the two-way Positive Matrix Factorization (PMF) receptor model were applied to airborne fine particle compositional data collected at three sites in Hong Kong during two monitoring campaigns held from November 2000 to October 2001 and November 2004 to October 2005. PROMETHEE/GAIA indicated that air quality at the three sites was worse during the later monitoring campaign, and that the ranking of the sites during each campaign was: rural site > urban site > roadside site. The PMF analysis, on the other hand, identified six common sources at all of the sites (diesel vehicles, fresh sea salt, secondary sulphate, soil, aged sea salt and oil combustion), which together accounted for approximately 68.8 ± 8.7% of the fine particle mass. In addition, road dust, gasoline vehicles, biomass burning, secondary nitrate and metal processing were identified at some of the sites. Secondary sulphate was the largest contributor to the fine particle mass at the rural and urban sites, with vehicle emissions a major contributor at the roadside site. The PMF results are broadly similar to those obtained in a previous analysis by PCA/APCS; however, the PMF analysis resolved more factors at each site than PCA/APCS. In addition, the study demonstrated that combining results from multi-criteria decision making analysis and receptor modelling can provide more detailed information that can be used to formulate the scientific basis for mitigating air pollution in the region.
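For readers unfamiliar with PROMETHEE, the sketch below shows a bare-bones PROMETHEE II style ranking that uses only the simple "usual" preference function and combines weighted pairwise preferences into net outranking flows. The site names follow the abstract, but the criterion values and weights are hypothetical, and the study's actual preference functions may differ.

```python
# Minimal PROMETHEE II style sketch (usual preference function only):
# alternatives are ranked by net outranking flow over weighted pairwise
# comparisons. Criteria, weights and values are hypothetical, not the study's data.
def promethee_net_flows(values, weights):
    """values: {alternative: [criterion values]}; lower is better (pollutant loads)."""
    alts = list(values)
    n = len(alts)
    flows = {a: 0.0 for a in alts}
    for a in alts:
        for b in alts:
            if a == b:
                continue
            # usual preference function: 1 if a is strictly better (lower) than b
            pref_ab = sum(w * (1.0 if va < vb else 0.0)
                          for w, va, vb in zip(weights, values[a], values[b]))
            flows[a] += pref_ab / (n - 1)     # contributes to positive flow of a
            flows[b] -= pref_ab / (n - 1)     # and to negative flow of b
    return flows  # net flow = positive - negative; higher means preferred

sites = {"rural": [5.0, 1.2], "urban": [9.0, 2.0], "roadside": [14.0, 3.5]}
print(promethee_net_flows(sites, weights=[0.6, 0.4]))  # rural gets the highest net flow
```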
Abstract:
In Australia, rural research and development corporations and companies expended over $AUS500 million on agricultural research and development. A substantial proportion of this is invested in R&D in the beef industry. The Australian beef industry exports almost $AUS5 billion of product annually and invests heavily in new product development to improve beef quality and production efficiency. Review points are critical for effective new product development, yet many research and development bodies, particularly publicly funded ones, appear to ignore the importance of assessing products prior to their release. Significant sums of money are invested in developing technological innovations that have low levels and rates of adoption. Adoption rates could be improved if developers were more focused on technology uptake and less focused on proving that their technologies can be applied in practice. Several approaches have been put forward in an effort to improve rates of adoption in operational settings. This paper presents a study of key technological innovations in the Australian beef industry to assess the use of multiple criteria in evaluating the potential uptake of new technologies. Findings indicate that using multiple criteria to evaluate innovations before commercialising a technology enables researchers to better understand the issues that may inhibit adoption.
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows the users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can the contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. The Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, remain worthwhile questions to investigate. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction as it captures the common visual properties of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms often fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image. To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
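As a rough illustration of the kind of context-driven disambiguation described in phase 3, the sketch below re-scores candidate object labels (e.g. SVM outputs) with co-occurrence compatibilities drawn from an ontology-like table; all labels, scores and compatibility values here are hypothetical.

```python
# Sketch of context-based label disambiguation: candidate labels (e.g. from an
# SVM over salient-object features) are re-scored with co-occurrence knowledge
# so that implausible combinations are suppressed. All values are hypothetical.
def disambiguate(candidates, context_labels, cooccurrence):
    """candidates: {label: classifier score}; context_labels: labels already
    assigned to other objects in the image; cooccurrence[(a, b)]: compatibility in [0, 1]."""
    def context_support(label):
        if not context_labels:
            return 1.0
        return sum(cooccurrence.get((label, c), 0.0) for c in context_labels) / len(context_labels)

    rescored = {lbl: score * context_support(lbl) for lbl, score in candidates.items()}
    return max(rescored, key=rescored.get)

cooc = {("boat", "water"): 0.9, ("car", "water"): 0.1,
        ("boat", "sky"): 0.6, ("car", "sky"): 0.5}
print(disambiguate({"boat": 0.55, "car": 0.60}, ["water", "sky"], cooc))  # -> "boat"
```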
Abstract:
Purpose. To investigate evidence-based visual field size criteria for referral of low-vision (LV) patients for mobility rehabilitation. Methods. One hundred and nine participants with LV and 41 age-matched participants with normal sight (NS) were recruited. The LV group was heterogeneous with diverse causes of visual impairment. We measured binocular kinetic visual fields with the Humphrey Field Analyzer and mobility performance on an obstacle-rich, indoor course. Mobility was assessed as percent preferred walking speed (PPWS) and number of obstacle-contact errors. The weighted kappa coefficient of association (κr) was used to discriminate LV participants with both unsafe and inefficient mobility from those with adequate mobility on the basis of their visual field size for the full sample and for subgroups according to type of visual field loss and whether or not the participants had previously received orientation and mobility training. Results. LV participants with both PPWS <38% and errors >6 on our course were classified as having inadequate (inefficient and unsafe) mobility compared with NS participants. Mobility appeared to be first compromised when the visual field was less than about 1.2 steradians (sr; solid angle of a circular visual field of about 70° diameter). Visual fields <0.23 and 0.63 sr (31 to 52° diameter) discriminated patients with at-risk mobility for the full sample and across the two subgroups. A visual field of 0.05 sr (15° diameter) discriminated those with critical mobility. Conclusions. Our study suggests that: practitioners should be alert to potential mobility difficulties when the visual field is less than about 1.2 sr (70° diameter); assessment for mobility rehabilitation may be warranted when the visual field is constricted to about 0.23 to 0.63 sr (31 to 52° diameter) depending on the nature of their visual field loss and previous history (at risk); and mobility rehabilitation should be conducted before the visual field is constricted to 0.05 sr (15° diameter; critical).
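The correspondence between visual field diameter and solid angle quoted above follows from the spherical-cap formula; the check below is a reconstruction (not taken from the paper) and reproduces the stated values to within rounding.

```latex
% Solid angle of a circular visual field of angular diameter \theta (spherical cap):
\[
  \Omega \;=\; 2\pi\!\left(1 - \cos\tfrac{\theta}{2}\right)
\]
% Worked checks against the diameters quoted in the abstract:
% \theta = 70^\circ :  \Omega = 2\pi(1-\cos 35^\circ)   \approx 1.14\ \mathrm{sr} \ (\text{about } 1.2\ \mathrm{sr})
% \theta = 52^\circ :  \Omega = 2\pi(1-\cos 26^\circ)   \approx 0.64\ \mathrm{sr}
% \theta = 31^\circ :  \Omega = 2\pi(1-\cos 15.5^\circ) \approx 0.23\ \mathrm{sr}
% \theta = 15^\circ :  \Omega = 2\pi(1-\cos 7.5^\circ)  \approx 0.05\ \mathrm{sr}
```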
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameters-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.
Index Terms: Model order estimation, model selection, information theoretic criteria, bootstrap
1. INTRODUCTION
Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information theoretic-based procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one can find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial model-order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
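To make the distinction concrete, the sketch below contrasts the two searches on synthetic data: the full-model (nested) search fits the first k regressors at each order, whereas the partial-model search fits the best k-subset of regressors, and both are scored with AIC. The data, regressor layout and AIC variant are illustrative choices, not the paper's setup.

```python
# Sketch of full-model versus partial-model order selection with AIC for a
# linear regression y = X w + noise on synthetic data. AIC here is
# n*log(RSS/n) + 2*k (constants dropped), as is common for Gaussian-noise
# least squares; this is an illustration, not the paper's code.
import itertools
import numpy as np

def aic(y, X_sub):
    n = len(y)
    w, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    rss = np.sum((y - X_sub @ w) ** 2)
    return n * np.log(rss / n) + 2 * X_sub.shape[1]

def full_model_order(y, X, max_order):
    # nested (full-model) search: order k always uses the first k columns of X
    return min(range(1, max_order + 1), key=lambda k: aic(y, X[:, :k]))

def partial_model_order(y, X, max_order):
    # partial-model search: within each order k, keep the best subset of k columns
    best = {}
    for k in range(1, max_order + 1):
        best[k] = min(aic(y, X[:, list(cols)])
                      for cols in itertools.combinations(range(X.shape[1]), k))
    return min(best, key=best.get)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] + 1.5 * X[:, 4] + 0.5 * rng.standard_normal(200)  # true support: columns 0 and 4
print(full_model_order(y, X, 6), partial_model_order(y, X, 6))
```

In examples of this kind, the nested search must grow to a large order before it captures a regressor that appears late in the column ordering, whereas the subset search can recover the small true support directly, which is the intuition behind the reported accuracy gains.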
Abstract:
This report presents the findings of an exploratory study into the perceptions held by students regarding the use of criterion-referenced assessment in an undergraduate differential equations class. Students in the class were largely unaware of the concept of criterion referencing and of the various interpretations that this concept has among mathematics educators. Our primary goal was to investigate whether explicitly presenting assessment criteria to students was useful to them and guided them in responding to assessment tasks. Quantitative data and qualitative feedback from students indicate that while students found the criteria easy to understand and useful in informing them as to how they would be graded, the manner in which they actually approached the assessment activity was not altered as a result of the use of explicitly communicated grading criteria.
Abstract:
This paper uses dynamic computer simulation techniques to develop and apply a multi-criteria procedure using non-destructive, vibration-based parameters for damage assessment in truss bridges. In addition to changes in natural frequencies, this procedure incorporates two parameters, namely modal flexibility and modal strain energy. Using numerically simulated modal data obtained through finite element analysis of the healthy and damaged bridge models, algorithms based on modal flexibility and modal strain energy changes before and after damage are obtained and used as indices for assessing the structural health state. The application of these two parameters to truss-type structures has received limited attention in the literature. The proposed multi-criteria damage assessment procedure is therefore developed and applied to truss bridges. The application of the approach is demonstrated through numerical simulation studies of a single-span, simply supported truss bridge with eight damage scenarios corresponding to different types of deck and truss damage. Results show that the proposed multi-criteria method is effective for damage assessment in this type of bridge superstructure.
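The abstract does not give the index formulas; commonly used forms of the two vibration-based parameters are sketched below as a reconstruction, so the paper's exact definitions may differ.

```latex
% Modal flexibility assembled from the m identified mass-normalised mode shapes
% \phi_i and natural frequencies \omega_i, and its change due to damage:
\[
  F \;\approx\; \sum_{i=1}^{m} \frac{1}{\omega_i^{2}}\,\phi_i \phi_i^{\mathsf{T}},
  \qquad
  \Delta F \;=\; F_{\mathrm{damaged}} - F_{\mathrm{healthy}} .
\]
% Modal strain energy of element j in mode i, compared before and after damage
% (K_j is the element stiffness matrix):
\[
  \mathrm{MSE}_{ij} \;=\; \phi_i^{\mathsf{T}} K_j \,\phi_i .
\]
```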