934 results for random network coding
Abstract:
Active learning approaches reduce the annotation cost required by traditional supervised approaches to reach the same effectiveness, by actively selecting informative instances during the learning phase. However, the effectiveness and robustness of the learnt models are influenced by a number of factors. In this paper we investigate the factors that affect the effectiveness, more specifically in terms of stability and robustness, of active learning models built using conditional random fields (CRFs) for information extraction applications. Stability, defined as a small variation in performance when small variations in the training data or in the parameters occur, is a major issue for machine learning models, but even more so in the active learning framework, which aims to minimise the amount of training data required. The factors we investigate are a) the choice of incremental vs. standard active learning, b) the feature set used as a representation of the text (i.e., morphological, syntactic, or semantic features) and c) the Gaussian prior variance, one of the important CRF parameters. Our empirical findings show that incremental learning and the Gaussian prior variance lead to more stable and robust models across iterations. Our study also demonstrates that orthographical, morphological and contextual features, as a group of basic features, play an important role in learning effective models across all iterations.
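The abstract does not include the selection strategy itself; the following is a minimal, self-contained sketch of the pool-based active learning loop it studies, using uncertainty (entropy) sampling and scikit-learn's LogisticRegression as a stand-in for the CRF. The regularization strength C plays a role loosely analogous to the Gaussian prior variance; the dataset, batch size and all parameter values are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool of instances: the paper uses CRFs over token sequences; a plain
# classifier stands in here so the active learning loop itself stays visible.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labelled = list(range(20))                        # small labelled seed set
pool = [i for i in range(len(y)) if i not in labelled]

model = LogisticRegression(C=1.0, max_iter=1000)  # C ~ prior-variance analogue
for it in range(10):
    model.fit(X[labelled], y[labelled])
    probs = model.predict_proba(X[pool])
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    # Query the most uncertain instances and move them to the labelled set.
    query = np.argsort(entropy)[-20:]
    picked = [pool[i] for i in query]
    labelled.extend(picked)
    pool = [i for i in pool if i not in picked]
    print(f"iteration {it}: labelled set size = {len(labelled)}")
```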
Abstract:
Low voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. The estimation of transformer losses is a necessary part of the routine upgrading and replacement of transformers, and identifying the phase connections of households allows a precise estimation of the phase loadings and total transformer loss. This paper presents a new technique and preliminary test results for a method of automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The techniques are novel as they are purely based upon a time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using a combination of electrical power and current measurements from real smart meter datasets demonstrate the performance of our techniques.
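The abstract does not spell out the correlation procedure; a minimal sketch of the underlying idea, assuming time-aligned voltage series and using hypothetical synthetic data, is shown below: each customer is assigned to the transformer phase whose voltage series correlates most strongly with the household measurements.

```python
import numpy as np

def identify_phase(house_v, transformer_v):
    """Assign a customer to the transformer phase whose voltage time series
    is most strongly correlated with the household voltage.

    house_v:       1-D array of household voltage samples
    transformer_v: dict {'A': ..., 'B': ..., 'C': ...} of aligned series
    """
    scores = {ph: np.corrcoef(house_v, v)[0, 1]
              for ph, v in transformer_v.items()}
    return max(scores, key=scores.get), scores

# Synthetic illustration: phase B carries the same slow voltage trend as the
# household, plus independent measurement noise on every channel.
rng = np.random.default_rng(0)
trend = {ph: rng.normal(0, 1, 1440).cumsum() for ph in "ABC"}
house = 230 + 0.1 * trend["B"] + rng.normal(0, 0.2, 1440)
ref = {ph: 230 + 0.1 * trend[ph] + rng.normal(0, 0.2, 1440) for ph in "ABC"}
print(identify_phase(house, ref)[0])   # expected: 'B'
```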
Abstract:
A new technique is presented for automatically identifying the phase connection of domestic customers. Voltage information from a reference three-phase house is correlated with voltage information from other customer electricity meters on the same network to determine the highest-probability phase connection. The technique is purely based upon a time series of electrical voltage measurements taken by the household smart meters, and no additional equipment is required. The method is demonstrated using real smart meter datasets to correctly identify the phase connections of 75 consumers on a low voltage distribution feeder.
Abstract:
Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance on various classification tasks. In this paper, we present for the first time a place recognition technique based on CNN models, by combining the powerful features learnt by CNNs with a spatial and sequential filter. Applying the system to a 70 km benchmark place recognition dataset, we achieve a 75% increase in recall at 100% precision, significantly outperforming all previous state-of-the-art techniques. We also conduct a comprehensive performance comparison of the utility of features from all 21 layers for place recognition, both for the benchmark dataset and for a second dataset with more significant viewpoint changes.
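The exact spatial and sequential filter is not described in the abstract; the sketch below illustrates the general approach under simplifying assumptions: precomputed (here synthetic) CNN-layer features are matched by cosine similarity, and a simple sequence filter averages similarities along short diagonals of the similarity matrix before taking the best match.

```python
import numpy as np

# Assume CNN-layer features for a reference traverse and a query traverse
# have already been extracted and L2-normalised; synthetic stand-ins are
# used here so the matching step is runnable on its own.
rng = np.random.default_rng(1)
ref = rng.normal(size=(500, 4096))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
qry = ref[100:200] + 0.3 * rng.normal(size=(100, 4096))
qry /= np.linalg.norm(qry, axis=1, keepdims=True)

sim = qry @ ref.T                      # single-image cosine similarity
seq_len = 10                           # simple sequential filter: average the
seq = np.zeros_like(sim)               # similarity along short diagonals
for i in range(sim.shape[0]):
    for j in range(sim.shape[1]):
        k = min(seq_len, sim.shape[0] - i, sim.shape[1] - j)
        seq[i, j] = np.trace(sim[i:i + k, j:j + k]) / k
best = seq.argmax(axis=1)              # matched reference frame per query
truth = np.arange(100) + 100           # ground truth for the synthetic data
print("fraction matched within 2 frames:", (np.abs(best - truth) <= 2).mean())
```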
Abstract:
Introduction: Risk factor analyses for nosocomial infections (NIs) are complex. First, due to competing events for NI, the association between risk factors of NI as measured using hazard rates may not coincide with the association using cumulative probability (risk). Second, patients from the same intensive care unit (ICU) who share the same environmental exposure are likely to be more similar with regard to risk factors predisposing to a NI than patients from different ICUs. We aimed to develop an analytical approach to account for both features and to use it to evaluate associations between patient- and ICU-level characteristics with both rates of NI and competing risks and with the cumulative probability of infection. Methods: We considered a multicenter database of 159 intensive care units containing 109,216 admissions (813,739 admission-days) from the Spanish HELICS-ENVIN ICU network. We analyzed the data using two models: an etiologic model (rate based) and a predictive model (risk based). In both models, random effects (shared frailties) were introduced to assess heterogeneity. Death and discharge without NI were treated as competing events for NI. Results: There was a large heterogeneity across ICUs in NI hazard rates, which remained after accounting for multilevel risk factors, meaning that there are remaining unobserved ICU-specific factors that influence NI occurrence. Heterogeneity across ICUs in terms of cumulative probability of NI was even more pronounced. Several risk factors had markedly different associations in the rate-based and risk-based models. For some, the associations differed in magnitude. For example, high Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were associated with modest increases in the rate of nosocomial bacteremia, but large increases in the risk. Others differed in sign; for example, respiratory vs cardiovascular diagnostic categories were associated with a reduced rate of nosocomial bacteremia, but an increased risk. Conclusions: A combination of competing risks and multilevel models is required to understand direct and indirect risk factors for NI and distinguish patient-level from ICU-level factors.
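For readers unfamiliar with the rate/risk distinction, the following is a minimal sketch (not the paper's multilevel frailty model) of the nonparametric cumulative incidence of NI under competing events; it shows how competing death or discharge caps the cumulative probability even when the event rate is high. The toy data are hypothetical.

```python
import numpy as np

def cumulative_incidence(times, events, cause):
    """Nonparametric cumulative incidence of `cause` under competing risks.

    times:  event or censoring time per admission
    events: 0 = censored, 1 = nosocomial infection (NI), 2 = death/discharge
    cause:  event type of interest (1 for NI)
    """
    times, events = np.asarray(times, float), np.asarray(events)
    surv, cif, curve = 1.0, 0.0, []
    for t in np.unique(times):
        at_risk = (times >= t).sum()
        d_cause = ((times == t) & (events == cause)).sum()
        d_any = ((times == t) & (events != 0)).sum()
        cif += surv * d_cause / at_risk     # risk accrues only while still in ICU
        surv *= 1.0 - d_any / at_risk       # overall event-free survival
        curve.append((t, cif))
    return curve

# Toy data: 1 = NI, 2 = competing event (death or discharge), 0 = censored.
print(cumulative_incidence([3, 5, 5, 7, 8, 10], [1, 2, 2, 1, 0, 2], cause=1)[-1])
```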
Abstract:
Identifying appropriate decision criteria and making optimal decisions in a structured way is a complex process. This paper presents an approach for doing this in the form of a hybrid Quality Function Deployment (QFD) and Cybernetic Analytic Network Process (CANP) model for project manager selection. This involves the use of QFD to translate the owner's project management expectations into selection criteria and the CANP to weight the expectations and selection criteria. The supermatrix approach then prioritises the candidates with respect to the overall decision-making goal. A case study is used to demonstrate the use of the model in selecting a renovation project manager. This involves the development of 18 selection criteria in response to the owner's three main expectations of time, cost and quality.
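As a rough illustration of the supermatrix step, the sketch below computes limiting priorities by raising a column-stochastic weighted supermatrix to a high power; the 3x3 interdependency matrix over the owner's three expectations is hypothetical and not taken from the case study.

```python
import numpy as np

def limit_priorities(supermatrix, power=256):
    """Approximate ANP limiting priorities by repeatedly squaring a
    column-stochastic weighted supermatrix until it practically converges."""
    w = np.asarray(supermatrix, float)
    w = w / w.sum(axis=0, keepdims=True)      # ensure columns sum to 1
    for _ in range(int(np.log2(power))):
        w = w @ w
    return w.mean(axis=1)                     # stable columns give the priorities

# Hypothetical interdependencies between the owner's three expectations
# (time, cost, quality); entries are illustrative only.
W = [[0.0, 0.6, 0.3],
     [0.5, 0.0, 0.7],
     [0.5, 0.4, 0.0]]
print(limit_priorities(W))   # relative weights of time, cost and quality
```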
Abstract:
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide based pollen samples. Given the laboriousness of purely manual image acquisition and identification it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real world case study where we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
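A comparison protocol along these lines can be reproduced with scikit-learn; since the Classifynder feature set is not bundled here, the sketch below uses the library's digits dataset as a stand-in and cross-validates the same classifier families named in the abstract.

```python
from sklearn.datasets import load_digits          # stand-in for pollen features
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Compare the classifier families from the abstract with 5-fold cross-validation.
X, y = load_digits(return_X_y=True)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Neural network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```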
Abstract:
The rapid pace of change in social media means that our understanding of the way in which it facilitates the learning process continues to lag. The findings of a longitudinal study of an executive MBA cohort's use of a social media application over a period of eight months are presented. Over time the ownership and use of the Yammer site shifted to become student driven and facilitated. The motivations behind the site's use, perceived advantages and disadvantages and changes in usage patterns are documented. The case provides a useful insight into the way in which students used this technology to facilitate their learning goals and how patterns of behaviour changed in response to the changing needs of the cohort.
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have an R2 goodness of fit of 0.9994 and 0.9982 respectively over a 10 h test period. The utility of the framework is demonstrated on a number of usage scenarios including causal analysis and ‘what-if’ analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
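The HQBN formulation itself is not reproduced in the abstract; as a minimal sketch of the queueing side, the code below maps hypothetical states of a Bayesian network parent node (e.g. a "flight bank" factor) to Poisson arrival rates and evaluates the resulting mean queueing delay with the standard M/M/c (Erlang C) formula. All rates and server counts are illustrative, not taken from the Brisbane case study.

```python
import math

def mmc_wait(lam, mu, c):
    """Mean waiting time in an M/M/c queue: Poisson arrivals at rate lam,
    exponential service at rate mu per server, c parallel servers."""
    a, rho = lam / mu, lam / (c * mu)
    if rho >= 1:
        return float("inf")                   # unstable: queue grows without bound
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = (a**c / (math.factorial(c) * (1 - rho))) * p0
    return erlang_c / (c * mu - lam)          # expected queueing delay

# Hypothetical BN fragment: the arrival rate at a processing point depends on
# the state of a parent node; each state maps to a Poisson rate (pax/min).
arrival_rate = {"off-peak": 2.0, "peak": 7.5}
for state, lam in arrival_rate.items():
    print(state, f"mean queue time = {mmc_wait(lam, mu=0.8, c=10):.2f} min")
```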
Abstract:
This paper proposes a supervisory control scheme for storage units to provide load leveling in distribution networks. This approach coordinates storage units to charge during periods of high generation and discharge during peak load times, while also indirectly improving the network voltage profile. The aim of this control strategy is to establish power sharing on a pro rata basis among the storage units. As a case study, a practical distribution network with 30 buses is simulated and the results are provided.
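The paper's actual control law is not given in the abstract; a minimal sketch of pro rata power sharing, assuming hypothetical unit ratings and a simplified state-of-charge availability limit, might look like this.

```python
def pro_rata_setpoints(p_request, ratings, soc):
    """Split a charge (+) or discharge (-) request across storage units in
    proportion to their rated power, capped by each unit's availability.

    p_request: total power requested by the supervisory controller (kW)
    ratings:   rated power of each unit (kW)
    soc:       state of charge of each unit, 0..1
    """
    # Simplified availability proxy: charging limited by remaining capacity,
    # discharging limited by stored energy.
    avail = [r * (1 - s) if p_request > 0 else r * s
             for r, s in zip(ratings, soc)]
    total = sum(ratings)
    setpoints = []
    for r, a in zip(ratings, avail):
        share = p_request * r / total      # pro rata on rated power
        share = max(-a, min(a, share))     # respect the unit's limits
        setpoints.append(share)
    # Note: a full controller would redistribute any shortfall from capped units.
    return setpoints

# Three hypothetical units sharing a 100 kW charging request.
print(pro_rata_setpoints(100, ratings=[50, 30, 20], soc=[0.9, 0.2, 0.5]))
```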
Abstract:
Large-scale integration of non-inertial generators such as wind farms will create frequency stability issues due to reduced system inertia. Inertia-based frequency stability studies are important to predict the performance of a power system with an increased level of renewables. This paper focuses on the impact of large-scale wind penetration on the frequency stability of the Australian power network. MATLAB Simulink is used to develop a frequency-based dynamic model utilizing the network data from a simplified 14-generator Australian power system. The loss of generation is modeled as the active power disturbance, and the minimum inertia required to maintain frequency stability is determined for the five-area power system.
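The effect of reduced inertia can be illustrated with an aggregate swing-equation sketch; the numbers below (system size, disturbance, droop, inertia constants) are hypothetical and are not taken from the 14-generator model.

```python
import numpy as np

# Aggregate swing-equation sketch: the initial rate of change of frequency
# after a loss of generation scales inversely with the stored inertia 2*H*S,
# which falls as wind displaces synchronous machines.
f0, S = 50.0, 45_000.0          # nominal frequency (Hz), system MVA base
dP = -650.0                     # loss of generation (MW), negative imbalance
R, T = 0.05, 8.0                # aggregate governor droop (pu) and lag (s)

def frequency_trace(H, dt=0.01, t_end=20.0):
    f, p_gov, out = f0, 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        # First-order governor response toward the droop target (in MW).
        p_gov += dt / T * (-(f - f0) / (R * f0) * S - p_gov)
        dfdt = (dP + p_gov) * f0 / (2.0 * H * S)   # swing equation (Hz/s)
        f += dfdt * dt
        out.append(f)
    return np.array(out)

for H in (6.0, 3.0):            # high vs reduced system inertia constant (s)
    trace = frequency_trace(H)
    print(f"H={H}: initial ROCOF = {dP*f0/(2*H*S):.3f} Hz/s, "
          f"nadir = {trace.min():.2f} Hz")
```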
Abstract:
With the overwhelming increase in the amount of data on the web and in databases, many text mining techniques have been proposed for mining useful patterns in text documents. Extracting closed sequential patterns using the Pattern Taxonomy Model (PTM) is one of the pruning methods used to remove noisy, inconsistent, and redundant patterns. However, the PTM treats each extracted pattern as a whole without considering its constituent terms, which can affect the quality of the extracted patterns. This paper proposes an innovative and effective method that extends the random set to accurately weight patterns based on their distribution in the documents and the distribution of their terms within patterns. The proposed approach then finds the specific closed sequential patterns (SCSP) based on the newly calculated weights. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms other state-of-the-art methods on different popular measures.
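The exact random-set weighting is not detailed in the abstract; the simplified sketch below conveys the idea of spreading document-level pattern support over a pattern's terms and then re-scoring patterns from their term weights. The patterns, supports and document count are hypothetical.

```python
from collections import defaultdict

# Simplified sketch (not the paper's exact random-set formulation): pattern
# support is distributed over the pattern's terms, then each pattern is
# re-scored from its terms, so patterns sharing well-supported terms gain weight.
patterns = {                          # hypothetical closed sequential patterns
    ("global", "economy"): 4,         # pattern -> number of supporting documents
    ("global", "market", "economy"): 3,
    ("stock", "market"): 5,
}
n_docs = 10

term_weight = defaultdict(float)
for pat, support in patterns.items():
    for term in pat:
        term_weight[term] += (support / n_docs) / len(pat)

pattern_weight = {pat: sum(term_weight[t] for t in pat) / len(pat)
                  for pat in patterns}
for pat, w in sorted(pattern_weight.items(), key=lambda x: -x[1]):
    print(pat, round(w, 3))
```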
Abstract:
The Hong Kong construction industry is currently facing an ageing workforce and a labour shortage. There are opportunities for employing ethnic minority construction workers to join this hazardous industry. These ethnic minority workers are prone to accidents due to communication barriers. Safety communication plays an important role in avoiding accidents on construction sites. However, ethnic minority workers are often not fluent in the local language and face safety communication problems while working with local workers. Social network analysis (SNA), an effective tool to identify the safety communication flow on a construction site, is used to obtain measures of safety communication such as centrality, density and betweenness within the ethnic minority and local workers, and to generate sociograms that visually represent communication patterns within effective and ineffective safety networks. The aim of this paper is to present the application of SNA for improving the safety communication of ethnic minorities in the construction industry of Hong Kong. The paper provides the theoretical background of SNA approaches for data collection and analysis using the software UCINET and NetDraw, to determine the predominant safety communication network structure and pattern of ethnic minorities on site.
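The paper uses UCINET and NetDraw; the same centrality, density and betweenness measures can be computed with the open-source networkx library, as in the sketch below, where the edge list of the safety-communication network is hypothetical.

```python
import networkx as nx

# Hypothetical safety-communication network: who talks to whom about safety
# on site (EM = ethnic minority worker, L = local worker, S = supervisor).
edges = [("S1", "L1"), ("S1", "L2"), ("L1", "L2"), ("L2", "EM1"),
         ("EM1", "EM2"), ("EM2", "EM3"), ("S1", "EM1")]
G = nx.Graph(edges)

print("density:", round(nx.density(G), 3))
print("degree centrality:", {n: round(c, 2)
                             for n, c in nx.degree_centrality(G).items()})
print("betweenness:", {n: round(c, 2)
                       for n, c in nx.betweenness_centrality(G).items()})
```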
Abstract:
This paper presents two key findings from a longitudinal study examining the dynamics of social networks during organisational change: one, the degree to which users seek new sources of information while adapting to the change; two, the degree to which social networks display structural resilience when undergoing significant structural and technological change. Users reported an increase in advice ties post-implementation; however, a proportionally higher increase in ties within their work group compared to the wider network was identified. The results also supported the supposition that while IT-driven change may initially disrupt social networks, some networks possess a high degree of resilience, with key players reasserting their original positions of influence following the initial phase of change-related disruption.