461 results for Delay Tolerant Network
Abstract:
Supervisory Control and Data Acquisition (SCADA) systems are widely used to control critical infrastructure automatically. Capturing and analyzing packet-level traffic flowing through such a network is an essential requirement for problems such as legacy network mapping and fault detection. Working from captured network traffic, we present a simple modeling technique that supports mapping the SCADA network topology via traffic monitoring. By characterizing atomic network components in terms of their input-output topology and the relationships between their data traffic logs, we show that these modeling primitives have good compositional behaviour, which allows complex networks to be modeled. Finally, the predictions generated by our model are found to be in good agreement with experimentally obtained traffic.
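The compositional idea can be sketched as follows. This is a hypothetical illustration, not the paper's actual primitives: each atomic component declares input and output ports, and a topology is recovered by matching outputs to inputs; all component and port names are invented.

```python
def compose(components):
    """Recover network edges by matching each component's declared
    output ports to other components' input ports."""
    edges = []
    for name, spec in components.items():
        for port in spec["outputs"]:
            for other, other_spec in components.items():
                if other != name and port in other_spec["inputs"]:
                    edges.append((name, other, port))
    return edges

# Hypothetical SCADA fragment: a remote terminal unit (RTU) feeding a
# data concentrator, which feeds the master terminal unit (MTU).
components = {
    "rtu1":         {"inputs": [],         "outputs": ["bus_a"]},
    "concentrator": {"inputs": ["bus_a"],  "outputs": ["uplink"]},
    "mtu":          {"inputs": ["uplink"], "outputs": []},
}
print(compose(components))
# → [('rtu1', 'concentrator', 'bus_a'), ('concentrator', 'mtu', 'uplink')]
```

In a real setting the port matches would be inferred from correlations between traffic logs rather than declared by hand.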
Abstract:
Background: Seizures and interictal spikes in mesial temporal lobe epilepsy (MTLE) affect a network of brain regions rather than a single epileptic focus. Simultaneous electroencephalography and functional magnetic resonance imaging (EEG-fMRI) studies have demonstrated a functional network in which hemodynamic changes are time-locked to spikes. However, whether this reflects the propagation of neuronal activity from a focus, or conversely the activation of a network linked to spike generation remains unknown. The functional connectivity (FC) changes prior to spikes may provide information about the connectivity changes that lead to the generation of spikes. We used EEG-fMRI to investigate FC changes immediately prior to the appearance of interictal spikes on EEG in patients with MTLE. Methods/principal findings: Fifteen patients with MTLE underwent continuous EEG-fMRI during rest. Spikes were identified on EEG and three 10 s epochs were defined relative to spike onset: spike (0–10 s), pre-spike (−10 to 0 s), and rest (−20 to −10 s, with no previous spikes in the preceding 45s). Significant spike-related activation in the hippocampus ipsilateral to the seizure focus was found compared to the pre-spike and rest epochs. The peak voxel within the hippocampus ipsilateral to the seizure focus was used as a seed region for FC analysis in the three conditions. A significant change in FC patterns was observed before the appearance of electrographic spikes. Specifically, there was significant loss of coherence between both hippocampi during the pre-spike period compared to spike and rest states. Conclusion/significance: In keeping with previous findings of abnormal inter-hemispheric hippocampal connectivity in MTLE, our findings specifically link reduced connectivity to the period immediately before spikes. This brief decoupling is consistent with a deficit in mutual (inter-hemispheric) hippocampal inhibition that may predispose to spike generation.
Abstract:
Network Real-Time Kinematic (NRTK) is a technology that provides centimeter-level positioning services in real time, enabled by a network of Continuously Operating Reference Stations (CORS). The location-oriented CORS placement problem is an important problem in the design of an NRTK, as it directly affects not only the installation and operational cost of the NRTK but also the quality of the positioning services it provides. This paper presents a Memetic Algorithm (MA) for the location-oriented CORS placement problem, which hybridizes the powerful explorative search capacity of a genetic algorithm with the efficient and effective exploitative search capacity of local optimization. Experimental results show that the MA outperforms existing approaches. We also conduct an empirical study of the scalability of the MA, the effectiveness of the hybridization technique, and the selection of the crossover operator.
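The general shape of such a hybridization can be sketched as below. The objective is a toy stand-in, not the paper's CORS coverage model, and all parameters are illustrative: a genetic algorithm explores, and each offspring is refined by a greedy bit-flip local search before re-entering the population.

```python
import random

def memetic_algorithm(fitness, n_bits, pop_size=20, generations=50, seed=0):
    """Minimal memetic algorithm sketch: a steady-state genetic
    algorithm whose offspring are refined by bit-flip local search."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def local_search(ind):
        # Exploitative step: greedy single-bit-flip hill climbing.
        best, best_f = ind[:], fitness(ind)
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):
                cand = best[:]
                cand[i] ^= 1
                if fitness(cand) > best_f:
                    best, best_f, improved = cand, fitness(cand), True
        return best

    for _ in range(generations):
        # Explorative step: tournament selection + one-point crossover.
        p1 = max(rng.sample(pop, 2), key=fitness)
        p2 = max(rng.sample(pop, 2), key=fitness)
        cut = rng.randrange(1, n_bits)
        child = p1[:cut] + p2[cut:]
        if rng.random() < 0.1:               # occasional mutation
            child[rng.randrange(n_bits)] ^= 1
        child = local_search(child)          # memetic refinement
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        pop[worst] = child                   # steady-state replacement
    return max(pop, key=fitness)

# Toy stand-in objective (NOT the paper's coverage model): prefer
# placements that select exactly 3 of 8 candidate station sites.
toy = lambda ind: -abs(sum(ind) - 3)
best = memetic_algorithm(toy, n_bits=8)
print(sum(best))  # → 3
```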
Abstract:
Network reconfiguration is an important stage of restoring a power system after a complete blackout or a local outage. Reasonable planning of the network reconfiguration procedure is essential for rapidly restoring the power system concerned. An approach for evaluating the importance of a line is first proposed based on the line contraction concept. Then, interpretative structural modeling (ISM) is employed to analyze the relationships among the factors that impact network reconfiguration. Priority is given to the security and speed of restoring generating units, and a method is next proposed to select the generating unit to be restored by maximizing the restoration benefit, considering both the generation capacity of the restored generating unit and the importance of the lines in the restoration path. The start-up sequence of generating units and the related restoration paths are optimized together in the proposed method, thereby avoiding the shortcomings of existing methods that solve these two issues separately. Finally, the New England 10-unit 39-bus power system and the Guangdong power system in South China are employed to demonstrate the basic features of the proposed method.
Abstract:
This thesis presents an association rule mining approach, association hierarchy mining (AHM). Unlike traditional two-step bottom-up rule mining, AHM adopts a one-step top-down strategy to improve the efficiency and effectiveness of mining association rules from datasets. The thesis also presents a novel approach to evaluating the quality of the knowledge discovered by AHM, which focuses on the information difference between the discovered knowledge and the original datasets. Experiments performed on a real application, characterizing network traffic behaviour, show that AHM achieves encouraging performance.
Abstract:
This article analyses co-movements in a wide group of commodity prices during the period 1992–2010. Our methodological approach is based on the correlation matrix and the networks it contains. Through this approach we are able to summarize global interaction and interdependence, capturing the existing heterogeneity in the degrees of synchronization between commodity prices. Our results produce two main findings: (a) we do not observe a persistent increase in the degree of co-movement of commodity prices over our sample, although from mid-2008 to the end of 2009 co-movements almost doubled relative to the average correlation; (b) we observe three groups of commodities which have exhibited similar price dynamics (metals; oil and grains; and oilseeds) and which have increased their degree of co-movement during the sampled period.
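The correlation-network construction can be sketched as follows, a minimal illustration with invented price series rather than the article's data: commodities are linked whenever their pairwise Pearson correlation exceeds a threshold, and the average pairwise correlation summarizes the overall degree of co-movement.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def correlation_network(series, threshold=0.5):
    """Link two commodities when their correlation exceeds the
    threshold; also report the average pairwise correlation."""
    edges, corrs = [], []
    for a, b in combinations(series, 2):
        r = pearson(series[a], series[b])
        corrs.append(r)
        if r > threshold:
            edges.append((a, b, round(r, 2)))
    return edges, sum(corrs) / len(corrs)

# Hypothetical price series (not the article's data).
prices = {
    "oil":   [1, 2, 3, 4, 5],
    "grain": [2, 4, 6, 8, 10],   # moves with oil
    "gold":  [5, 3, 6, 2, 4],    # largely unrelated
}
edges, avg = correlation_network(prices)
print(edges)  # → [('oil', 'grain', 1.0)]
```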
Abstract:
In this paper, we present a dynamic model to identify influential users of micro-blogging services. Micro-blogging services such as Twitter allow their users (twitterers) to publish tweets and to follow other users in order to receive their tweets. Previous work on user influence on Twitter concentrates on the following-link structure and the content users publish, and seldom emphasizes the importance of interactions among users. We argue that, by focusing on user actions on the micro-blogging platform, user influence can be measured more accurately. Since micro-blogging is a powerful social media and communication platform, identifying influential users according to their interactions has practical value; for example, advertisers may care about how many actions (purchases, in this scenario) the influential users can initiate rather than how many advertisements they spread. Borrowing the idea of the PageRank algorithm, we propose a model built on an action-based network that captures the ability of influential users as they interact with the micro-blogging platform. Taking the evolving prosperity of micro-blogging into consideration, we extend our action-based user influence model into a dynamic one that can distinguish influential users in different time periods. Simulation results demonstrate that our models support and give reasonable explanations for the scenarios we considered.
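A PageRank-style ranking over an action graph can be sketched as below. This is a generic illustration, not the authors' model: edge weights count actions (e.g. retweets) from one user on another's content, so influence flows from the actor to the acted-upon user; the interaction counts are invented.

```python
def action_rank(actions, d=0.85, iters=50):
    """PageRank-style influence over an action graph, where
    actions[u][v] counts how many times user u acted on user v's
    content. Dangling users spread their rank uniformly."""
    users = sorted(set(actions) |
                   {v for nb in actions.values() for v in nb})
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iters):
        nxt = {u: (1 - d) / n for u in users}
        for u, nb in actions.items():
            total = sum(nb.values())
            for v, w in nb.items():
                nxt[v] += d * rank[u] * w / total
        dangling = sum(rank[u] for u in users
                       if u not in actions or not actions[u])
        for u in users:
            nxt[u] += d * dangling / n
        rank = nxt
    return rank

# Hypothetical interaction counts (not real Twitter data):
# alice retweeted bob 3 times and carol once; carol retweeted bob twice.
actions = {"alice": {"bob": 3, "carol": 1}, "carol": {"bob": 2}}
rank = action_rank(actions)
print(max(rank, key=rank.get))  # → bob
```

The dynamic variant described in the abstract would recompute such a ranking over per-period action graphs.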
Abstract:
As a key element of their response to new media forcing transformations in mass media and media use, newspapers have deployed various strategies not only to establish online and mobile products and develop healthy business plans, but also to set out to be dominant portals. Their response to change was the subject of an early investigation by one of the present authors (Keshvani 2000). That was part of a set of short studies inquiring into what impact new software applications and digital convergence might have on journalism practice (Tickle and Keshvani 2000), and also looking for demonstrations of the way that innovations, technologies and protocols then under development might produce a “wireless, streamlined electronic news production process” (Tickle and Keshvani 2001). The newspaper study compared the online products of The Age in Melbourne and the Straits Times in Singapore. It provided an audit of the Singapore and Australia Information and Communications Technology (ICT) climate, concentrating on the state of development of carrier networks as a determining factor in the potential strength of the two services in their respective markets. In the outcome, contrary to initial expectations, the early cable roll-out and extensive ‘wiring’ of the city in Singapore had not produced a level of uptake of Internet services as strong as that achieved in Melbourne by more ad hoc and varied strategies. By interpretation, while news websites and online content were at an early stage of development everywhere, and much the same as one another, no determining structural imbalance existed to separate these leading media participants in Australia and South-east Asia. The present research revisits that situation by again studying the online editions of the two large newspapers in the original study, and one other, The Courier Mail (recognising the diversification of types of product in this field, by including it as a representative of Newscorp, now a major participant).
The inquiry works through the principle of comparison. It is an exercise in qualitative, empirical research that establishes a comparison between the situation in 2000 as described in the earlier work, and the situation in 2014, after a decade of intense development in digital technology affecting the media industries. It is in that sense a follow-up study on the earlier work, although this time giving emphasis to the content and style of the actual products as experienced by their users. It compares the online and print editions of each of these three newspapers; then the three mastheads as print and online entities, among themselves; and finally it compares one against the other two, as representing a South-east Asian model and Australian models. This exercise is accompanied by a review of literature on the developments in ICT affecting media production and media organisations, to establish the changed context. The new study of the online editions is conducted as a systematic appraisal of the first level, or principal screens, of the three publications, over the course of six days (10-15.2.14 inclusive). For this, categories for analysis were developed through a preliminary examination of the products over three days in the week before. That process identified significant elements of media production, such as: variegated sourcing of materials; randomness in the presentation of items; differential production values among the media platforms considered, whether text, video or still images; the occasional repurposing and repackaging of the top news stories of the day; and the presence of standard news values, once again drawn out of the trial ‘bundle’ of journalistic items. Reduced in this way, the online artefacts become comparable with the companion print editions from the same days. The categories devised and then used in the appraisal of the online products have been adapted to print, to give the closest match of sets of variables.
This device, studying the two sets of publications on like standards (essentially production values and news values), has enabled the comparisons to be made. This comparing of the online and print editions of each of the three publications was set up as the first step in the investigation. In recognition of the nature of the artefacts, as ones that carry very diverse information by subject and level of depth, and involve heavy creative investment in the formulation and presentation of the information, the assessment also includes an open section for interpreting and commenting on the main points of comparison. This takes the form of a field for text, for the insertion of notes, in the table employed for summarising the features of each product for each day. When the sets of comparisons as outlined above are noted, the process then becomes interpretative, guided by the notion of change. In the context of changing media technology and publication processes, what substantive alterations have taken place in the overall effort of news organisations in the print and online fields since 2001, and in their print and online products separately? Have they diverged or continued along similar lines? The remaining task is to begin to make inferences from that. Will the examination of findings support the proposition that a review of the earlier study, and a forensic review of new models, provides evidence of the character and content of change, especially change in journalistic products and practice? Will it permit an authoritative description of the essentials of such change in products and practice? Will it permit generalisation, and provide a reliable base for discussion of the implications of change and future prospects? Preliminary observations suggest a more dynamic and diversified product has been developed in Singapore, well themed, and obviously sustained by public commitment and habituation to diversified online and mobile media services.
The Australian products suggest a concentrated corporate and journalistic effort and deployment of resources, with a strong market focus, but less settled and ordered, and showing signs of limitations imposed by the delay in establishing a uniform, large broadband network. The scope of the study is limited. It is intended to test, and take advantage of, the original study as evidentiary material from the early days of newspaper companies’ experimentation with online formats. Both are small studies. The key opportunity for discovery lies in the ‘time capsule’ factor: the availability of well-gathered and processed information on major newspaper company production at the threshold of a transformational decade of change in their industry. The comparison stands to identify key changes. It should also be useful as a reference for further inquiries of the same kind that might be made, and for monitoring the situation of newspaper portals online into the future.
Abstract:
Low voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. The estimation of transformer losses is a necessary part of the routine upgrading and replacement of transformers, and identifying the phase connections of households allows a precise estimation of the phase loadings and total transformer loss. This paper presents a new technique, and preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The techniques are novel in that they are based purely on a time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using real smart meter datasets of electrical power and current demonstrate the performance of our techniques.
Abstract:
A new technique is presented for automatically identifying the phase connection of domestic customers. Voltage information from a reference three phase house is correlated with voltage information from other customer electricity meters on the same network to determine the highest probability phase connection. The techniques are purely based upon a time series of electrical voltage measurements taken by the household smart meters and no additional equipment is required. The method is demonstrated using real smart meter datasets to correctly identify the phase connections of 75 consumers on a low voltage distribution feeder.
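The core of this correlation approach can be sketched as below. The voltage readings are invented for illustration (real data would span days of smart-meter samples): the customer is assigned to whichever reference phase its voltage time series correlates with most strongly.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def identify_phase(customer_v, reference_v):
    """Assign the customer to the reference phase whose voltage time
    series correlates most strongly with the customer's."""
    return max(reference_v,
               key=lambda ph: pearson(customer_v, reference_v[ph]))

# Hypothetical per-interval voltage readings (volts) from a reference
# three-phase house.
reference = {
    "A": [239.1, 240.2, 238.7, 241.0, 239.5],
    "B": [241.3, 239.0, 240.8, 238.9, 240.1],
    "C": [238.5, 241.1, 239.9, 240.4, 238.8],
}
customer = [239.0, 240.3, 238.6, 241.1, 239.4]   # tracks phase A
print(identify_phase(customer, reference))  # → A
```

The correlation works because customers on the same phase see the same local voltage fluctuations, while other phases fluctuate independently.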
Abstract:
Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance on various classification tasks. In this paper, we present for the first time a place recognition technique based on CNN models, combining the powerful features learnt by CNNs with a spatial and sequential filter. Applying the system to a 70 km benchmark place recognition dataset, we achieve a 75% increase in recall at 100% precision, significantly outperforming all previous state-of-the-art techniques. We also conduct a comprehensive performance comparison of the utility of features from all 21 layers for place recognition, both for the benchmark dataset and for a second dataset with more significant viewpoint changes.
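The sequential-filtering idea can be sketched as follows. This is a generic illustration, not the paper's exact pipeline: tiny hand-made vectors stand in for CNN-layer feature vectors, and each query frame is matched by summing single-frame cosine similarities along an aligned sequence of database frames rather than trusting any one frame alone.

```python
def cosine(a, b):
    """Cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def sequential_match(query_feats, db_feats, seq_len=3):
    """Score each candidate database location by summing frame-level
    similarities along an aligned sequence of seq_len frames, and
    return the best-matching database index per query window."""
    sim = [[cosine(q, d) for d in db_feats] for q in query_feats]
    matches = []
    for qi in range(len(query_feats) - seq_len + 1):
        best, best_score = None, float("-inf")
        for di in range(len(db_feats) - seq_len + 1):
            score = sum(sim[qi + k][di + k] for k in range(seq_len))
            if score > best_score:
                best, best_score = di, score
        matches.append(best)
    return matches

# Hypothetical features: database of 5 places; queries are noisy
# observations of places 1-3, so the window should match index 1.
db = [[1, 0], [0, 1], [1, 1], [0, 2], [2, 1]]
queries = [[0.1, 1.0], [1.0, 0.9], [0.1, 1.9]]
matches = sequential_match(queries, db)
print(matches)  # → [1]
```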
Abstract:
Identifying appropriate decision criteria and making optimal decisions in a structured way is a complex process. This paper presents an approach for doing this in the form of a hybrid Quality Function Deployment (QFD) and Cybernetic Analytic Network Process (CANP) model for project manager selection. This involves the use of QFD to translate the owner's project management expectations into selection criteria and the CANP to weight the expectations and selection criteria. The supermatrix approach then prioritises the candidates with respect to the overall decision-making goal. A case study is used to demonstrate the use of the model in selecting a renovation project manager. This involves the development of 18 selection criteria in response to the owner's three main expectations of time, cost and quality.
Abstract:
The rapid pace of change in social media means that our understanding of the way in which it facilitates the learning process continues to lag. The findings of a longitudinal study of an executive MBA cohort's use of a social media application over a period of eight months are presented. Over time, ownership and use of the Yammer site shifted to become student driven and facilitated. The motivations behind the site's use, its perceived advantages and disadvantages, and changes in usage patterns are documented. The case provides a useful insight into the way in which students used this technology to facilitate their learning goals and how patterns of behaviour changed in response to the changing needs of the cohort.
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have R² goodness-of-fit values of 0.9994 and 0.9982 respectively over a 10 h test period. The utility of the framework is demonstrated on a number of usage scenarios including causal analysis and ‘what-if’ analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
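The coupling of a discrete causal factor to Poisson/exponential queue dynamics can be illustrated with a toy discrete-event simulation. This is not the paper's HQBN or Brisbane data; the factor states, rates, and horizon are invented: an upstream condition selects the Poisson arrival rate of a single-server queue with exponential service times.

```python
import random

def simulate_queue(horizon, arrival_rate, service_rate, seed=0):
    """Single-server queue with Poisson arrivals (exponential
    inter-arrival times) and exponential service times; returns how
    many passengers complete processing by `horizon`."""
    rng = random.Random(seed)
    in_queue, processed = 0, 0
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    while True:
        t = min(next_arrival, next_departure)
        if t > horizon:
            return processed
        if next_arrival <= next_departure:
            in_queue += 1                      # arrival event
            next_arrival = t + rng.expovariate(arrival_rate)
            if in_queue == 1:                  # server was idle
                next_departure = t + rng.expovariate(service_rate)
        else:
            in_queue -= 1                      # departure event
            processed += 1
            next_departure = (t + rng.expovariate(service_rate)
                              if in_queue else float("inf"))

# A hypothetical upstream factor (say, whether a bank of flights has
# just landed) selects the arrival rate, standing in for a parent
# node of the queue in the Bayesian network.
rates = {"quiet": 0.5, "flight_bank": 4.0}     # passengers per minute
results = {state: simulate_queue(horizon=60, arrival_rate=lam,
                                 service_rate=5.0, seed=1)
           for state, lam in rates.items()}
print(results)
```

In the HQBN framing, the BN supplies the conditional rates and the queue model turns them into passenger-flow dynamics.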
Abstract:
This paper proposes a supervisory control for storage units to provide load leveling in distribution networks. The approach coordinates storage units to charge during periods of high generation and discharge during peak load times, while also being utilized to improve the network voltage profile indirectly. The aim of this control strategy is to establish power sharing among the storage units on a pro rata basis. As a case study, a practical distribution network with 30 buses is simulated and the results are provided.
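A pro rata sharing rule of the general kind described can be sketched as follows; the unit names, capacities, and power request are illustrative, not the paper's controller:

```python
def pro_rata_dispatch(net_power, capacities):
    """Share a charging (+) or discharging (-) power request across
    storage units in proportion to each unit's rated capacity."""
    total = sum(capacities.values())
    return {u: net_power * c / total for u, c in capacities.items()}

# Hypothetical storage fleet (rated capacities in kWh).
caps = {"s1": 100.0, "s2": 50.0, "s3": 50.0}
print(pro_rata_dispatch(40.0, caps))
# → {'s1': 20.0, 's2': 10.0, 's3': 10.0}
```

The supervisory layer would call such a rule each control interval, with the sign and magnitude of the request driven by the measured generation surplus or load peak.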