932 results for data complexity
Abstract:
Topographic structural complexity of a reef is highly correlated with coral growth rates, coral cover and overall levels of biodiversity, and is therefore integral in determining ecological processes. Modeling these processes commonly includes measures of rugosity obtained from a wide range of survey techniques that often fail to capture rugosity at different spatial scales. Here we show that accurate estimates of rugosity can be obtained from video footage captured using underwater video cameras (i.e., monocular video). To demonstrate the accuracy of our method, we compared the results to in situ measurements of a 2 m x 20 m area of forereef on Glovers Reef atoll in Belize. Sequential pairs of images were used to compute fine-scale bathymetric reconstructions of the reef substrate, from which precise measurements of rugosity and reef topographic structural complexity can be derived across multiple spatial scales. To achieve accurate bathymetric reconstructions from uncalibrated monocular video, the position of the camera for each image in the video sequence and the intrinsic parameters (e.g., focal length) must be computed simultaneously. We show that these parameters can often be determined when the data exhibits parallax-type motion, and that rugosity and reef complexity can be accurately computed from existing video sequences taken with any type of underwater camera in any reef habitat or location. This technique opens a wide range of possibilities for future coral reef research by providing a cost-effective and automated method of determining structural complexity and rugosity in both new and historical video surveys of coral reefs.
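A minimal sketch of how a rugosity index could be computed once a gridded bathymetric reconstruction is available; the grid layout, cell size and triangulation scheme here are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def rugosity_index(depth, cell_size):
    """Surface-area rugosity of a gridded bathymetric patch.

    depth     : 2-D array of depths (metres), one value per grid node
    cell_size : horizontal grid spacing (metres)

    Returns the ratio of triangulated 3-D surface area to planar area;
    a perfectly flat patch gives 1.0, rougher substrate gives larger values.
    """
    z = np.asarray(depth, dtype=float)
    nrows, ncols = z.shape
    planar_area = (nrows - 1) * (ncols - 1) * cell_size ** 2

    # Coordinates of every grid node.
    x = np.arange(ncols) * cell_size
    y = np.arange(nrows) * cell_size
    xx, yy = np.meshgrid(x, y)
    pts = np.stack([xx, yy, z], axis=-1)

    def tri_area(p, q, r):
        return 0.5 * np.linalg.norm(np.cross(q - p, r - p), axis=-1)

    a = pts[:-1, :-1]   # top-left corner of each cell
    b = pts[:-1, 1:]    # top-right
    c = pts[1:, :-1]    # bottom-left
    d = pts[1:, 1:]     # bottom-right

    # Split each cell into two triangles and sum their areas.
    surface_area = tri_area(a, b, c).sum() + tri_area(d, c, b).sum()
    return surface_area / planar_area
```

Evaluating the same function on progressively coarsened copies of the grid gives rugosity estimates at multiple spatial scales.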
Abstract:
This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. Literature-based discovery has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been well demonstrated in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fit for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
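As a hedged illustration of why fixed-dimension representations keep discovery tractable, the sketch below ranks candidate C-terms against a starting A-term by cosine similarity over pre-computed term vectors; the vector source (e.g. random indexing) and all names are assumptions. Each query costs O(|C| * d) for dimension d, independent of corpus size.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def indirect_candidates(a_term, c_terms, vectors, threshold=0.3):
    """Rank candidate C-terms by distributional similarity to a starting A-term.

    vectors : dict mapping each term to a fixed-dimension numpy vector,
              e.g. built by random indexing over a large corpus.
    Returns (term, similarity) pairs above the threshold, best first; high
    similarity without direct co-occurrence is the kind of indirect
    association that literature-based discovery looks for.
    """
    a_vec = vectors[a_term]
    scored = [(c, cosine(a_vec, vectors[c])) for c in c_terms]
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)
```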
Abstract:
Trees can represent the semi-structured data that is common in the web domain. Finding similarities between trees is essential for many applications that deal with semi-structured data. Existing similarity methods compare the nodes and paths of two trees to compute their similarity. However, these methods perform poorly on unordered tree data and incur NP-hard or MAX-SNP-hard complexity. In this paper, we present a novel method that first encodes a tree using an optimal traversal and then uses this encoding to model the tree with an equivalent matrix representation, so that similarity between unordered trees can be found efficiently. Empirical analysis shows that the proposed method achieves high accuracy even on large data sets.
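The encoding itself is not detailed in the abstract, so the sketch below only illustrates the general idea with a simple order-invariant stand-in: each tree is summarised by a parent-label-to-child-label count matrix, and two trees are compared by the cosine similarity of the flattened matrices.

```python
import numpy as np

def label_adjacency_matrix(tree, labels):
    """Order-invariant matrix encoding: count parent-label -> child-label edges.

    tree   : dict mapping node id to (label, [child node ids])
    labels : sorted list of all labels occurring in the data set
    """
    index = {lab: i for i, lab in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)))
    for node_id, (lab, children) in tree.items():
        for child in children:
            m[index[lab], index[tree[child][0]]] += 1
    return m

def tree_similarity(t1, t2, labels):
    """Cosine similarity between the flattened matrix encodings of two trees."""
    a = label_adjacency_matrix(t1, labels).ravel()
    b = label_adjacency_matrix(t2, labels).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```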
Abstract:
Background: The implementation of the Australian Consumer Law in 2011 highlighted the need for better use of injury data to improve the effectiveness and responsiveness of product safety (PS) initiatives. In the PS system, resources are allocated to different priority issues using risk assessment tools. The rapid exchange of information (RAPEX) tool for prioritising hazards, developed by the European Commission, is currently being adopted in Australia. Injury data is required as a basic input to the RAPEX tool in the risk assessment process. One of the challenges in utilising injury data in the PS system is the complexity of translating detailed clinically coded data into broad categories such as those used in the RAPEX tool.
Aims: This study aims to translate hospital burns data into a simplified format by mapping International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM) burn codes into RAPEX severity rankings, and to use these rankings to identify priority areas in childhood product-related burns data.
Methods: ICD-10-AM burn codes were mapped into four levels of severity using the RAPEX guide table, assigning rankings from 1 to 4 in order of increasing severity. RAPEX rankings were determined by the burn thickness and the burn surface area (BSA), with information extracted from the fourth character of T20-T30 codes for burn thickness and the fourth and fifth characters of T31 codes for the BSA. Following the mapping process, a secondary analysis of 2008-2010 Queensland Hospital Admitted Patient Data Collection (QHAPDC) paediatric data was conducted to identify priority areas in product-related burns.
Results: Applying the RAPEX rankings to the QHAPDC burn data showed that approximately 70% of paediatric burns in Queensland hospitals were categorised under RAPEX levels 1 and 2, 25% under levels 3 and 4, and the remaining 5% were unclassifiable. In the PS system, priority is given to issues categorised under RAPEX levels 3 and 4. Analysis of external cause codes within these levels showed that flammable materials (for children aged 10-15 years) and hot substances (for children aged under 2 years) were the most frequently identified products.
Discussion and conclusions: The mapping of ICD-10-AM burn codes into RAPEX rankings showed a favourable degree of compatibility between the two classification systems, suggesting that ICD-10-AM coded burn data can be simplified to support PS initiatives more effectively. The secondary data analysis also showed that only 25% of all admitted burn cases in Queensland were severe enough to trigger a PS response.
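A hedged sketch of the character-extraction step described in the Methods: the character positions follow the abstract, but the severity cut-offs are placeholders rather than the official RAPEX guide table.

```python
def rapex_rank(icd_code):
    """Illustrative mapping of an ICD-10-AM burn code to a RAPEX rank (1-4).

    The character positions follow the study (4th character of T20-T30 for
    burn thickness; 4th and 5th characters of T31 for burn surface area),
    but the rank cut-offs below are placeholders, not the RAPEX guide table.
    """
    code = icd_code.replace(".", "").upper()
    category = code[:3]

    if category == "T31":                          # BSA extent codes
        bsa_decile = int(code[3])                  # 4th character
        full_thickness = int(code[4]) if len(code) > 4 else 0   # 5th character
        if full_thickness >= 2 or bsa_decile >= 3:
            return 4                               # placeholder threshold
        return 3 if bsa_decile >= 1 else 2
    if "T20" <= category <= "T30":                 # site-specific burn codes
        thickness = int(code[3])                   # 4th character
        return {1: 1, 2: 2, 3: 4}.get(thickness, 2)  # placeholder ranks
    raise ValueError("not a T20-T31 burn code: " + icd_code)
```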
Abstract:
The planning of IMRT treatments requires a compromise between dose conformity (complexity) and deliverability. This study investigates established and novel treatment complexity metrics for 122 IMRT beams from prostate treatment plans. The Treatment and Dose Assessor software was used to extract the necessary data from exported treatment plan files and calculate the metrics. For most of the metrics, there was strong overlap between the calculated values for plans that passed and failed their quality assurance (QA) tests. However, statistically significant variation between plans that passed and failed QA measurements was found for the established modulation index and for a novel metric describing the proportion of small apertures in each beam. The ‘small aperture score’ provided threshold values which successfully distinguished deliverable treatment plans from plans that did not pass QA, with a low false negative rate.
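The metric's published definition is not reproduced in the abstract, so the following is only an illustrative version of a 'small aperture score': the fraction of open MLC leaf-pair gaps below an assumed threshold, pooled over all control points of a beam.

```python
import numpy as np

def small_aperture_score(leaf_gaps_mm, threshold_mm=10.0):
    """Fraction of open MLC leaf-pair gaps in a beam smaller than a threshold.

    leaf_gaps_mm : list of 1-D arrays, one per control point, giving the gap
                   (mm) between each opposing leaf pair
    threshold_mm : gap below which an aperture counts as 'small' (the value
                   and the unweighted pooling are illustrative assumptions)
    """
    gaps = np.concatenate([np.asarray(g, dtype=float) for g in leaf_gaps_mm])
    open_gaps = gaps[gaps > 0]          # ignore fully closed leaf pairs
    if open_gaps.size == 0:
        return 0.0
    return float(np.mean(open_gaps < threshold_mm))
```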
Abstract:
This paper first presents the benefits of, and critical challenges in, using Bluetooth and Wi-Fi for crowd data collection and monitoring. The major challenges include antenna characteristics, the complexity of the environment and scanning features. Wi-Fi and Bluetooth are compared in terms of architecture, discovery time, popularity of use and signal strength. The type of antenna used and the complexity of the environment, such as trees in outdoor spaces and partitions in indoor spaces, strongly affect the scanning range. These challenges are evaluated empirically through real-world experiments using Bluetooth and Wi-Fi scanners. Issues related to antenna characteristics are also highlighted by experimenting with different antenna types. Novel scanning approaches, including the Overlapped Zones and Single Point Multi-Range detection methods, are then presented and verified by real-world tests. These techniques are applied to identify the locations of the captured MAC IDs, which can extract more information about people-movement dynamics.
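A minimal sketch of the Overlapped Zones idea, assuming each sub-zone has been surveyed in advance as the region covered by exactly one combination of scanners; the data structures and names are illustrative, not the implementation used in the paper.

```python
def locate_by_overlap(detections, zone_map):
    """Assign each captured MAC ID to a sub-zone from overlapping scanner coverage.

    detections : dict  MAC ID -> set of scanner IDs that saw it in one time window
    zone_map   : dict  frozenset of scanner IDs -> label of the sub-zone covered
                 only by that combination of scanners (from a site survey)
    Returns MAC ID -> zone label for devices whose scanner combination is known.
    """
    located = {}
    for mac, scanners in detections.items():
        zone = zone_map.get(frozenset(scanners))
        if zone is not None:
            located[mac] = zone
    return located
```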
Abstract:
This thesis was a step forward in extracting valuable features of human movement behaviour, in terms of space utilisation, from Media-Access-Control (MAC) data. The research offered a low-cost approach with lower computational complexity than existing methods for tracking human movement. It was successfully applied at QUT's Gardens Point campus and can be scaled to larger environments and populations. The movement information extractable with this approach can add significant value to the study of human movement behaviour, to future urban and interior design, and to crowd safety and evacuation planning.
Abstract:
This chapter describes decentralized data fusion algorithms for a team of multiple autonomous platforms. Decentralized data fusion (DDF) provides a useful basis on which to build cooperative information-gathering tasks for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Implementations of DDF using two probabilistic representations, Gaussian estimates and Gaussian mixtures, are compared on a common data set. The overall system design is detailed, providing insight into the complexity of implementing a robust DDF system for information-gathering tasks in outdoor UAV applications.
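For the Gaussian case, a standard decentralised fusion update in information (canonical) form is sketched below; it only illustrates the kind of update a DDF node performs and is not presented as the exact algorithm evaluated in the chapter.

```python
import numpy as np

def fuse_information(y_a, Y_a, y_b, Y_b, y_common, Y_common):
    """Fuse two Gaussian estimates held by different platforms, in information form.

    y_*, Y_*            : information vector and matrix of each local estimate
    y_common, Y_common  : information the two platforms already share (tracked,
                          e.g., by a channel filter), subtracted here so it is
                          not double counted.
    Returns the fused information vector and matrix.
    """
    Y_fused = Y_a + Y_b - Y_common
    y_fused = y_a + y_b - y_common
    return y_fused, Y_fused

# Recover mean and covariance when a decision is needed:
#   P = np.linalg.inv(Y_fused);  x = P @ y_fused
```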
Abstract:
Analytically or computationally intractable likelihood functions can arise in complex statistical inference problems, making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been applied predominantly to parameter estimation problems and less to model choice problems, due to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1–28), in which the posterior means of the model parameters, estimated through regression, form the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step, and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity for thorough exploration of the model space. The algorithm was applied to a validation example to demonstrate its robustness across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates its utility in inferring which transmission models are preferred for the pathogens.
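For orientation, a bare-bones rejection-ABC estimate of posterior model probabilities is sketched below; it omits the regression-adjusted summaries and the reversible jump step that distinguish the proposed algorithm, and the simulators, priors and tolerance are assumptions.

```python
import numpy as np

def abc_model_choice(observed_summary, simulators, priors, n_draws=100_000, tol=0.05):
    """Minimal rejection-ABC estimate of posterior model probabilities.

    simulators : list of functions theta -> simulated summary statistics
    priors     : list of functions () -> theta drawn from each model's prior
    tol        : fraction of draws retained (those closest to the observed data)
    """
    rng = np.random.default_rng(0)
    n_models = len(simulators)
    labels, distances = [], []
    for _ in range(n_draws):
        m = int(rng.integers(n_models))        # uniform prior over models
        theta = priors[m]()
        s = np.asarray(simulators[m](theta), dtype=float)
        labels.append(m)
        distances.append(np.linalg.norm(s - observed_summary))
    cutoff = np.quantile(distances, tol)       # keep only the closest draws
    accepted = np.zeros(n_models)
    for m, d in zip(labels, distances):
        if d <= cutoff:
            accepted[m] += 1
    return accepted / accepted.sum()
```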
Abstract:
In this paper we propose and study low-complexity algorithms for on-line estimation of hidden Markov model (HMM) parameters. The estimates approach the true model parameters as the measurement noise approaches zero, but otherwise give improved estimates, albeit with bias. On a finite data set in the high-noise case, the bias may not be significantly more severe than for a higher-complexity asymptotically optimal scheme. Our algorithms require O(N^3) calculations per time instant, where N is the number of states. Previous algorithms based on earlier hidden Markov model signal processing methods, including the expectation-maximisation (EM) algorithm, require O(N^4) calculations per time instant.
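For context, the per-step state recursion that such on-line schemes build on is the normalised HMM forward filter sketched below; the higher per-step cost of the parameter estimators (O(N^3) here, O(N^4) for the EM-based schemes) typically comes from additional statistics propagated alongside this filter, which are not shown.

```python
import numpy as np

def hmm_forward_step(alpha_prev, A, b_t):
    """One normalised forward-filter update for an N-state HMM.

    alpha_prev : previous filtered state distribution, length N
    A          : N x N transition matrix (rows sum to 1)
    b_t        : likelihood of the current observation under each state
    The update itself costs O(N^2) per time instant.
    """
    unnormalised = (alpha_prev @ A) * b_t
    return unnormalised / unnormalised.sum()
```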
Abstract:
This paper presents a single-pass algorithm for mining discriminative itemsets in data streams using a novel data structure and the tilted-time window model. Discriminative itemsets are itemsets that are frequent in one data stream and whose frequency in that stream is much higher than in the other streams in the dataset. To control the size of the data structure, we propose a pruning process that yields a compact tree structure containing the discriminative itemsets. Empirical analysis shows that the proposed method has sound time and space complexity.
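A hedged sketch of the discriminative-itemset condition itself; the thresholds are placeholders, and the paper maintains these counts incrementally in a compact tree over tilted-time windows rather than in plain dictionaries.

```python
def is_discriminative(itemset, counts, stream_sizes, target,
                      min_support=0.01, min_ratio=5.0):
    """Check whether an itemset is discriminative for the target stream.

    counts       : dict  stream name -> dict itemset -> occurrence count
    stream_sizes : dict  stream name -> number of transactions seen so far
    target       : stream in which the itemset should be frequent
    """
    target_freq = counts[target].get(itemset, 0) / stream_sizes[target]
    if target_freq < min_support:          # must be frequent in the target stream
        return False
    other_freqs = [counts[s].get(itemset, 0) / stream_sizes[s]
                   for s in counts if s != target]
    max_other = max(other_freqs, default=0.0)
    if max_other == 0.0:                   # never seen in the other streams
        return True
    return target_freq / max_other >= min_ratio
```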
Abstract:
Smart Card Automated Fare Collection (AFC) data has been extensively exploited to understand passenger behavior, passenger segments and trip purposes, and to improve transit planning through spatial travel pattern analysis. The literature has evolved from simple to more sophisticated methods, for example from aggregated to individual travel pattern analysis, and from stop-to-stop to flexible stop aggregation. However, high computational complexity has limited the practical application of these methods. This paper proposes a new algorithm, the Weighted Stop Density Based Scanning Algorithm with Noise (WS-DBSCAN), based on the classical Density Based Scanning Algorithm with Noise (DBSCAN) algorithm, to detect and update daily changes in travel patterns. WS-DBSCAN reduces the quadratic computational complexity of classical DBSCAN to a sub-quadratic problem. A numerical experiment using real AFC data from South East Queensland, Australia, shows that the algorithm requires only 0.45% of the computation time of classical DBSCAN while producing the same clustering results.
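WS-DBSCAN itself is not reproduced here; as a rough stand-in, the sketch below clusters stops with scikit-learn's standard DBSCAN using boarding counts as sample weights, which captures the weighted-stop-density idea but not the incremental daily update that gives WS-DBSCAN its sub-quadratic cost.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_stops(stop_latlon_deg, boarding_counts, eps_km=0.5, min_weight=30):
    """Weighted density-based clustering of transit stops (illustrative only).

    stop_latlon_deg : (n, 2) array of stop latitude/longitude in degrees
    boarding_counts : per-stop boardings used as sample weights, so heavily
                      used stops contribute more to the local density
    """
    coords = np.radians(stop_latlon_deg)           # haversine expects radians
    earth_radius_km = 6371.0
    db = DBSCAN(eps=eps_km / earth_radius_km,
                min_samples=min_weight,            # interpreted as total weight
                metric="haversine")
    labels = db.fit(coords, sample_weight=boarding_counts).labels_
    return labels                                  # -1 marks noise points
```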
Abstract:
Objective: To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data.
Design: Systematic review.
Data sources: The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliographies of all relevant articles were examined and associated articles were identified using a snowballing technique.
Selection criteria: For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data.
Methods: The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed.
Results: Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content-expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualising injury patterns and predicting future outcomes. However, difficulties related to generalisability, source data quality, model complexity and the integration of content and technical knowledge were discussed.
Conclusions: The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for analysing large databases, involvement of computer scientists in the injury prevention field, and more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see continued growth and advancement in knowledge of text mining in the injury field.
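As a toy illustration of the kind of supervised text classification the reviewed studies apply to injury narratives; the narratives, labels and model choice below are invented for the example and are not drawn from any reviewed study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical mini data set: free-text injury narratives with a cause label.
narratives = [
    "fell from ladder while cleaning gutters",
    "burned hand on hot stove element",
    "slipped on wet floor at work and fell",
    "scald from boiling water spilled from kettle",
]
labels = ["fall", "burn", "fall", "burn"]

# Bag-of-words features plus a simple Bayesian classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(narratives, labels)

print(model.predict(["tripped over a cable and fell onto concrete"]))  # ['fall']
```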
Abstract:
In recent years, rapid advances in information technology have led to various data collection systems which are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors including loop detectors, probe vehicles, cell phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Despite substantial advances in data assimilation techniques for reconstructing and predicting the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the greater complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, signalized or un-signalized, where the switching of the traffic lights and the turning maneuvers of road users lead to shock waves that propagate upstream of the intersections. This paper develops a new model-based methodology to build a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly loop detectors and partial observations from Bluetooth and GPS devices.
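The shock-wave propagation mentioned above can be made concrete with the Rankine-Hugoniot condition from kinematic wave theory; the sketch below computes the speed at which the interface between two traffic states moves, with illustrative numbers.

```python
def shockwave_speed(q_up, k_up, q_down, k_down):
    """Speed (km/h) of the interface between two traffic states on a link.

    q_* : flow in veh/h and k_* : density in veh/km, upstream and downstream
    of the interface. Negative values mean the interface (e.g. the back of a
    queue forming at a red signal) moves upstream against the traffic.
    """
    return (q_down - q_up) / (k_down - k_up)

# Example: free-flow traffic (1200 veh/h at 20 veh/km) meets a stopped queue
# (0 veh/h at 150 veh/km) when the signal turns red:
#   shockwave_speed(1200, 20, 0, 150)  ->  about -9.2 km/h
```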