886 results for Computer Networks
Abstract:
Popular wireless networks, such as IEEE 802.11/15/16, are not designed for real-time applications, so supporting real-time quality of service (QoS) in wireless real-time control is challenging. This paper adopts the widely used IEEE 802.11, with a focus on its distributed coordination function (DCF), for soft real-time control systems. The concept of the critical real-time traffic condition is introduced to characterize the marginal satisfaction of real-time requirements. Mathematical models are then developed to describe the dynamics of DCF-based real-time control networks with periodic traffic, a unique feature of control systems. Performance indices such as throughput and packet delay are evaluated using the developed models, particularly under the critical real-time traffic condition. Finally, the proposed modelling is applied to traffic rate control for cross-layer networked control system design.
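To make the flavour of such analytical DCF models concrete, the Python sketch below implements a classic Bianchi-style saturation model. It is a simplified stand-in for the kind of model the paper develops, not the authors' periodic-traffic model, and the 802.11 timing numbers are illustrative assumptions.

# Illustrative Bianchi-style saturation model for IEEE 802.11 DCF.
# Simplified stand-in (saturated stations, ideal channel); the timing
# parameters below are assumptions, not values from the paper.

def bianchi_fixed_point(n, w_min=32, m=5, iters=1000):
    """Damped fixed-point iteration for the per-slot transmission
    probability tau and the conditional collision probability p."""
    tau, p = 0.05, 0.0
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # collision probability seen by a station
        tau_new = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (w_min + 1) + p * w_min * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * tau_new            # damping for stable convergence
    return tau, p

def saturation_throughput(n, payload_us=8184.0, slot_us=50.0,
                          ts_us=8982.0, tc_us=8713.0):
    """Normalized saturation throughput for n contending stations."""
    tau, _ = bianchi_fixed_point(n)
    p_tr = 1.0 - (1.0 - tau) ** n                  # some station transmits in a slot
    p_s = n * tau * (1.0 - tau) ** (n - 1) / p_tr  # ... and the transmission succeeds
    denom = (1.0 - p_tr) * slot_us + p_tr * p_s * ts_us + p_tr * (1.0 - p_s) * tc_us
    return p_s * p_tr * payload_us / denom

for n in (5, 10, 20, 50):
    print(n, round(saturation_throughput(n), 3))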
Abstract:
The final shape of the "Internet of Things" that ubiquitous computing promises relies on a cybernetic system of inputs (in the form of sensory information), computation or decision making (based on the prefiguration of rules, contexts, and user-generated or user-defined metadata), and outputs (associated actions from ubiquitous computing devices). My interest in this paper lies in the computational intelligences that suture these positions together, and in how positioning these intelligences as autonomous agents extends the dialogue between human users and ubiquitous computing technology. Drawing specifically on scenarios surrounding the employment of ubiquitous computing within aged care, I argue that agency is something that cannot be traded without serious consideration of the associated ethics.
Abstract:
Purpose – The work presented in this paper aims to provide an approach to classifying web logs by personal properties of users. Design/methodology/approach – The authors describe an iterative system that begins with a small set of manually labeled terms, which are used to label queries from the log. A set of background knowledge related to these labeled queries is acquired by combining web search results on these queries. This background set is used to obtain many terms that are related to the classification task. The system then ranks each of the related terms, choosing those that best fit the personal properties of the users. These terms are then used to begin the next iteration. Findings – The authors identify the difficulties of classifying web logs by approaching the problem from a machine learning perspective. By applying the approach developed, the authors are able to show that many queries in a large query log can be classified. Research limitations/implications – Evaluating results in this type of classification work is difficult, as the true personal properties of web users are unknown. Evaluating the classification results by comparing classified queries to well-known age-related sites is a direction that is currently being explored. Practical implications – This research is background work that can be incorporated into search engines or other web-based applications to help marketing companies and advertisers. Originality/value – This research enhances the current state of knowledge in short-text classification and query log learning. Keywords: Classification schemes, Computer networks, Information retrieval, Man-machine systems, User interfaces
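As a rough illustration of the iterative labelling loop described above, the Python sketch below bootstraps class term lists from a toy query log. The web-search step that builds background knowledge is replaced by a stub, and the seed terms, queries and thresholds are hypothetical, not taken from the paper.

# Toy sketch of the iterative labelling loop; all data are hypothetical.
from collections import Counter

seed_terms = {"teen": {"homework", "prom"},
              "senior": {"pension", "retirement"}}

query_log = ["math homework help", "prom dress ideas", "pension calculator",
             "retirement homes near me", "cheap flights", "homework planner app"]

def background_text(query):
    # Stand-in for fetching and combining web search results on the query.
    return query

def iterate(seed_terms, log, rounds=2, per_class=3):
    terms = {c: set(t) for c, t in seed_terms.items()}
    labelled = {c: [] for c in terms}
    for _ in range(rounds):
        # Label queries that contain any of the current class terms.
        labelled = {c: [q for q in log if any(t in q for t in terms[c])]
                    for c in terms}
        # Rank candidate terms from the background text of the labelled
        # queries and keep the top ones for the next round.
        for c, qs in labelled.items():
            counts = Counter(w for q in qs for w in background_text(q).split())
            terms[c].update(w for w, _ in counts.most_common(per_class))
    return terms, labelled

terms, labelled = iterate(seed_terms, query_log)
print(labelled)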
Abstract:
Advanced substation applications, such as synchrophasors and IEC 61850-9-2 sampled value process buses, depend upon highly accurate synchronizing signals for correct operation. The IEEE 1588 Precision Time Protocol (PTP) is the recommended means of providing precise timing for future substations. This paper presents a quantitative assessment of PTP reliability using Fault Tree Analysis. Two network topologies are proposed that use grandmaster clocks with dual network connections and take advantage of the Best Master Clock Algorithm (BMCA) from IEEE 1588. The cross-connected grandmaster topology doubles reliability, and the addition of a shared third grandmaster gives a nine-fold improvement over duplicated grandmasters. The performance of BMCA-mediated handover of the grandmaster role during contingencies in the timing system was evaluated experimentally. The 1 µs performance requirement of sampled values and synchrophasors is met, even during network or GPS antenna outages. Slave clocks are shown to synchronize to the backup grandmaster in response to degraded performance or loss of the main grandmaster. Slave disturbances are less than 350 ns, provided the grandmaster reference clocks are not offset from one another. A clear understanding of PTP reliability and the factors that affect availability will encourage the adoption of PTP for substation time synchronization.
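The fault-tree reasoning behind such reliability comparisons can be illustrated with a small Python sketch. The AND/OR gate structure and the component unavailabilities below are illustrative assumptions, not figures or topologies from the paper.

# Fault-tree style sketch: probability that no grandmaster is reachable,
# for duplicated vs. cross-connected grandmaster topologies.
# Component unavailabilities are assumptions, not values from the paper.

q_gm = 1e-3    # grandmaster (clock + GPS reference) unavailable
q_sw = 1e-4    # a single network path/switch unavailable

def and_gate(*qs):          # the top event occurs only if all inputs fail
    p = 1.0
    for q in qs:
        p *= q
    return p

def or_gate(*qs):           # the top event occurs if any input fails
    p = 1.0
    for q in qs:
        p *= (1.0 - q)
    return 1.0 - p

# Duplicated grandmasters, each on its own single path:
# timing is lost if (GM1 or its path fails) AND (GM2 or its path fails).
dup = and_gate(or_gate(q_gm, q_sw), or_gate(q_gm, q_sw))

# Cross-connected grandmasters: each grandmaster has two independent paths,
# so its time is lost only if the GM fails or both of its paths fail.
cross = and_gate(or_gate(q_gm, and_gate(q_sw, q_sw)),
                 or_gate(q_gm, and_gate(q_sw, q_sw)))

print(f"duplicated: {dup:.3e}  cross-connected: {cross:.3e}")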
Abstract:
Threats against computer networks evolve very fast and require increasingly complex countermeasures. We argue that teams, or groups with a common purpose, for intrusion detection and prevention improve the defence against rapidly propagating attacks, similar to the concept of teams solving complex tasks known from the sociology of work. Collaboration in this sense is not an easy task, especially in heterarchical environments. We propose CIMD (collaborative intrusion and malware detection), a security overlay framework that enables cooperative intrusion detection approaches. Objectives and associated interests are used to create detection groups for the exchange of security-related data. In this work, we contribute a tree-oriented data model for representing devices in the scope of security. We introduce an algorithm for the formation of detection groups, show realization strategies for the system, and conduct a vulnerability analysis. We evaluate the benefit of CIMD by simulation and probabilistic analysis.
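A minimal Python sketch of how a tree-oriented device model could drive detection-group formation is given below. The attribute tree, objectives and matching rule are hypothetical simplifications for illustration, not the CIMD data model or algorithm itself.

# Toy sketch: form detection groups by matching objectives against a
# tree-structured device description. All names and rules are hypothetical.

devices = {
    "host-a": {"os": {"family": "linux", "version": "5.x"}, "role": "webserver"},
    "host-b": {"os": {"family": "linux", "version": "6.x"}, "role": "mailserver"},
    "host-c": {"os": {"family": "windows", "version": "11"}, "role": "webserver"},
}

# An objective selects devices by constraints on paths into the attribute tree.
objectives = {
    "linux-worm-watch": {("os", "family"): "linux"},
    "web-attack-watch": {("role",): "webserver"},
}

def lookup(tree, path):
    for key in path:
        tree = tree.get(key, {}) if isinstance(tree, dict) else {}
    return tree

def form_groups(devices, objectives):
    groups = {}
    for name, constraints in objectives.items():
        groups[name] = [d for d, attrs in devices.items()
                        if all(lookup(attrs, p) == v for p, v in constraints.items())]
    return groups

print(form_groups(devices, objectives))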
Abstract:
The emergence of global computer networks and the ubiquitous availability of advanced information and communication technology (ICT) since the mid-1990s has given rise to the hope that the traditional disadvantages faced by regional economies and regional communities could be alleviated easily and swiftly. Yet the experience of both community informatics and community development researchers and practitioners tells a different tale. Although the potential of ICT is in fact realised in some situations and locations and does provide means to ensure sustainability in some regional communities, elsewhere it has not been taken up or has not been able to bring about the promised change for the better. Too many communities are still faced with a centralised structure in the context of commerce, service provision or governance, and with various degrees of digital divide: between connected and disconnected, between media literate and illiterate, between young and old, and between urban and rural. Many attempts to close or bridge the digital divide have been reported, with various degrees of success (cf. Menou, 2001; Servon, 2002). Most of these accounts echo a common voice in that they report similar principles of action, and they reflect – in most cases unconsciously – practices of sociocultural animation. This article seeks to shed light on the concept of sociocultural animation, which is already commonplace in various forms in the arts, in education and professional development, youth work, sports, town planning, careers services, entrepreneurship and tourism. It starts by exploring the origins of sociocultural animation and draws parallels to the current state of research and practice. It unpacks the foundations of sociocultural animation and briefly describes the underlying principles and how they can be applied in the context of community informatics and developing regional communities with ICT. Finally, further areas of investigation are proposed.
Abstract:
Understanding network traffic behaviour is crucial for managing and securing computer networks. One important technique is to mine frequent patterns or association rules from analysed traffic data. On the one hand, association rule mining usually generates a huge number of patterns and rules, many of them meaningless or unwanted by users; on the other hand, association rule mining can miss necessary knowledge if it does not consider the hierarchy relationships in the network traffic data. Aiming to address such issues, this paper proposes a hybrid association rule mining method for characterizing network traffic behaviour. Rather than frequent patterns, the proposed method generates non-similar closed frequent patterns from network traffic data, which can significantly reduce the number of patterns. The method also derives new attributes from the original data to discover novel knowledge according to hierarchy relationships in the network traffic data and user interests. Experiments performed on real network traffic data show that the proposed method is promising and can be used in real applications. Copyright 2013 John Wiley & Sons, Ltd.
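To illustrate the notion of closed frequent patterns on traffic-like data, the Python sketch below mines closed frequent itemsets from a handful of toy flow records. The records and minimum support are made up, and the paper's non-similarity pruning and derived attributes are not reproduced.

# Toy closed frequent itemset mining over flow-attribute records.
from itertools import combinations

flows = [
    {"proto=tcp", "dport=80", "flag=syn"},
    {"proto=tcp", "dport=80", "flag=ack"},
    {"proto=udp", "dport=53"},
    {"proto=tcp", "dport=443", "flag=syn"},
]
min_support = 2

def frequent_itemsets(transactions, min_sup):
    items = sorted(set().union(*transactions))
    freq = {}
    for k in range(1, len(items) + 1):
        found = False
        for combo in combinations(items, k):
            sup = sum(1 for t in transactions if set(combo) <= t)
            if sup >= min_sup:
                freq[frozenset(combo)] = sup
                found = True
        if not found:          # no frequent k-itemset, so none larger either
            break
    return freq

def closed_only(freq):
    # An itemset is closed if no proper superset has the same support.
    return {s: c for s, c in freq.items()
            if not any(s < t and c == ct for t, ct in freq.items())}

print(closed_only(frequent_itemsets(flows, min_support)))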
Abstract:
A Remote Sensing Core Curriculum (RSCC) development project is currently underway. The project is being conducted under the auspices of the National Center for Geographic Information and Analysis (NCGIA) and is an outgrowth of the NCGIA GIS Core Curriculum project. It grew out of discussions begun at NCGIA Initiative 12 (I-12): 'Integration of Remote Sensing and Geographic Information Systems'. This curriculum development project focuses on providing professors, teachers and instructors in undergraduate and graduate institutions with course materials, prepared by experts in specific subject matter areas, for use in the classroom.
Abstract:
In this chapter, we discuss four related areas of cryptology, namely authentication, hashing, message authentication codes (MACs), and digital signatures. These topics represent active and growing research areas in cryptology. Space limitations allow us to concentrate only on the essential aspects of each topic. The bibliography is intended to supplement our survey; we have selected those items which provide an overview of the current state of knowledge in the above areas. Authentication deals with the problem of providing assurance to a receiver that a communicated message originates from a particular transmitter, and that the received message has the same content as the transmitted message. A typical authentication scenario occurs in computer networks, where the identity of two communicating entities is established by means of authentication. Hashing is concerned with the problem of providing a relatively short digest, or fingerprint, of a much longer message or electronic document. A hashing function must satisfy (at least) the critical requirement that the fingerprints of two distinct messages are distinct. Hashing functions have numerous applications in cryptology; they are often used as primitives to construct other cryptographic functions. MACs are symmetric-key primitives that provide message integrity against active spoofing by appending a cryptographic checksum to a message that is verifiable only by the intended recipient of the message. Message authentication is one of the most important ways of ensuring the integrity of information that is transferred by electronic means. Digital signatures provide electronic equivalents of handwritten signatures. They preserve the essential features of handwritten signatures and can be used to sign electronic documents. Digital signatures can potentially be used in legal contexts.
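A minimal Python sketch of the hashing and MAC primitives discussed above, using only the standard library; the key and message are illustrative placeholders.

# Hashing and MAC example with the Python standard library.
import hashlib, hmac

message = b"transfer 100 EUR to account 42"

# Hashing: a short fingerprint of a longer message.
fingerprint = hashlib.sha256(message).hexdigest()

# MAC: a keyed checksum verifiable only by holders of the shared secret key.
key = b"shared-secret-key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, received_tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)  # constant-time compare

print(fingerprint)
print(verify(key, message, tag))        # True
print(verify(key, b"tampered", tag))    # False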
Abstract:
In the developing digital economy, the notion of a traditional attack on enterprises of national significance or interest has transcended into different modes of electronic attack, surpassing accepted traditional forms of physical attack upon a target. The terrorist attacks that took place in the United States on September 11, 2001 demonstrated the physical devastation that could occur if any nation were the target of a large-scale terrorist attack. Therefore, there is a need to protect critical national infrastructure and critical information infrastructure. In particular, this protection is crucial for the proper functioning of a modern society and for a government to fulfill one of its most important prerogatives – namely, the protection of its people. Computer networks have many benefits that governments, corporations, and individuals alike take advantage of in order to promote and perform their duties and roles. Today, there is almost complete dependence on private sector telecommunication infrastructures and the associated computer hardware and software systems.1 These infrastructures and systems even support government and defense activity.2 This Article discusses possible attacks on critical information infrastructures and the government reactions to these attacks.
Abstract:
In a traditional anti-jamming system, a transmitter that wants to send a signal to a single receiver spreads the signal power over a wide frequency spectrum with the aim of stopping a jammer from blocking the transmission. In this paper, we consider the case where there are multiple receivers and the transmitter wants to broadcast a message to all receivers such that colluding groups of receivers cannot jam the reception of any other receiver. We propose efficient coding methods that achieve this goal and link this problem to well-known problems in combinatorics. We also link a generalisation of this problem to the Key Distribution Pattern problem studied in combinatorial cryptography.
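The combinatorial idea can be illustrated with a small Python sketch that checks whether an assignment of frequency slots to receivers is cover-free against coalitions of a given size, so that a colluding group can never cover all slots of another receiver. The assignment below is a toy example, not a construction from the paper.

# Toy check of a cover-free family of frequency-slot assignments.
from itertools import combinations

assignment = {            # receiver -> set of frequency slots it listens on
    "r1": {0, 1, 2},
    "r2": {0, 3, 4},
    "r3": {1, 3, 5},
    "r4": {2, 4, 5},
}

def cover_free(assignment, w):
    """Return whether no coalition of w receivers covers another's slots."""
    receivers = list(assignment)
    for colluders in combinations(receivers, w):
        jammed = set().union(*(assignment[c] for c in colluders))
        for victim in receivers:
            if victim in colluders:
                continue
            if assignment[victim] <= jammed:
                return False, colluders, victim
    return True, None, None

print(cover_free(assignment, w=2))  # True: any 2 colluders leave the others a clear slot
print(cover_free(assignment, w=3))  # False: 3 colluders can cover a victim's slots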
Abstract:
Precise clock synchronization is essential in emerging time-critical distributed control systems operating over computer networks, where the requirements are mostly for relative clock synchronization with high precision. Existing clock synchronization techniques such as the Network Time Protocol (NTP) and the IEEE 1588 standard can be difficult to apply to such systems because of the highly precise hardware clocks they require, the network congestion caused by a high frequency of synchronization message transmissions, and their high overheads. In response, we present a Time Stamp Counter based precise Relative Clock Synchronization Protocol (TSC-RCSP) for distributed control applications operating over local-area networks (LANs). In our protocol, a software clock based on the TSC register, which counts CPU cycles, is adopted in the time clients and server. TSC-based clocks offer clients a precise, stable and low-cost clock synchronization solution. Experimental results show that clock precision of the order of 10 microseconds can be achieved in small-scale LAN systems. Such clock precision is much higher than that of a processor's time-of-day clock, and is easily sufficient for most distributed real-time control applications over LANs.
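A minimal Python sketch of the request/response offset estimation that NTP/PTP-style protocols rely on is given below, with time.perf_counter_ns() standing in for a TSC-based software clock. The simulated server offset and delays are assumptions, and this is not the TSC-RCSP protocol itself.

# Simulated two-way timestamp exchange and offset/delay estimation.
import time

def now_ns(offset_ns=0):
    return time.perf_counter_ns() + offset_ns

SERVER_OFFSET = 5_000_000  # pretend the server clock is 5 ms ahead (assumption)

def exchange():
    t1 = now_ns()                          # client sends request
    time.sleep(0.001)                      # network delay to server
    t2 = now_ns(SERVER_OFFSET)             # server receives
    t3 = now_ns(SERVER_OFFSET)             # server replies
    time.sleep(0.001)                      # network delay back
    t4 = now_ns()                          # client receives reply
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client-server offset
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

offset, delay = exchange()
print(f"estimated offset: {offset/1e6:.3f} ms, path delay: {delay/1e6:.3f} ms")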
Abstract:
The aim of the study was to explore why the MuPSiNet project – a computer and network supported learning environment for the field of health care and social work – did not develop as expected. To grasp the problem, some hypotheses were formulated. The hypotheses concerned the teachers' skills in and attitudes towards computing and their attitudes towards constructivist study methods. An online survey containing 48 items was performed. The survey targeted all the teachers within the field of health care and social work in the country, and it produced 461 responses that were analysed against the hypotheses. The reliability of the variables was tested using the Cronbach alpha coefficient and t-tests. Poor basic computing skills among the teachers combined with a vulnerable technical solution, and inadequate project management combined with a lack of administrative models for transforming economic resources into manpower, were the factors that turned out to play a decisive role in the project. Other important findings were that the teachers had rather poor skills and knowledge in computing, computer safety and computer supported instruction, and that these skills were significantly poorer among female teachers, who were in the majority in the sample. The fraction of teachers who were familiar with software for electronic patient records (EPR) was low. Attitudes towards constructivist teaching methods were positive, and further education seemed to further increase the teachers' readiness to use alternative teaching methods. The most important conclusions were the following: In order to integrate EPR software as a natural tool in teaching the planning and documentation of health care, it is crucial that the teachers have sufficient basic skills in computing and that more teachers have personal experience of using EPR software. In order for computer supported teaching to become accepted, it is necessary to arrange extensive further education for the teachers presently working, and for that further education to succeed it should be backed up locally by, among other things, sufficient support in matters concerning computer supported teaching. Attitudes towards computing showed significant gender differences. Based on the findings, it is suggested that basic skills in computing should also include an awareness of data safety in relation to work in different kinds of computer networks, and that projects of this kind should be built around a proper project organisation with sufficient resources. Suggestions concerning curricular development and further education are also presented. Conclusions concerning the research method were that reminders have a better effect, and that respondents tend to answer open-ended questions more verbosely, in electronically distributed online surveys than in traditional surveys. A method of using randomized passwords to guarantee respondent anonymity while maintaining sample control is presented. Keywords: computer-assisted learning, computer-assisted instruction, health care, social work, vocational education, computerized patient record, online survey
Abstract:
Multi-access techniques are widely used in computer networking and distributed multiprocessor systems. On-the-fly arbitration schemes permit one of the many contenders to access the medium without collisions. Serial arbitration is cost-effective but slow, and hence unsuitable for high-speed multiprocessor environments supporting very high data transfer rates. A fully parallel arbitration scheme takes less time but is not practically realisable for large numbers of contenders. In this paper, a generalised parallel-serial scheme is proposed which significantly reduces the arbitration time and is practically realisable.
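A small Python simulation can illustrate the general idea of combining parallel and serial arbitration: contender IDs are split into digit groups, the groups are resolved serially from most to least significant, and within each group the highest asserted value wins, as on a wired-OR bus. The group size and IDs are illustrative, and this is a generic sketch rather than the specific scheme proposed in the paper.

# Group-wise (parallel within a group, serial across groups) arbitration.

def split_groups(ident, bits=8, group=4):
    """Split an ID into digit groups, most significant group first."""
    groups = []
    for shift in range(bits - group, -1, -group):
        groups.append((ident >> shift) & ((1 << group) - 1))
    return groups

def arbitrate(contenders, bits=8, group=4):
    """Return the winning contender (highest ID) after group-wise rounds."""
    alive = {c: split_groups(c, bits, group) for c in contenders}
    for round_idx in range(bits // group):              # one serial step per group
        best = max(g[round_idx] for g in alive.values())  # parallel compare in a group
        alive = {c: g for c, g in alive.items() if g[round_idx] == best}
        if len(alive) == 1:
            break
    return next(iter(alive))

print(arbitrate([0x3A, 0x9C, 0x97, 0x12]))  # 0x9C (156) wins in two rounds, not eight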
Abstract:
Delay and disruption tolerant networks (DTNs) are computer networks in which round-trip delays and error rates are high and disconnections are frequent. Examples of these extreme networks are space communications, sensor networks, connecting rural villages to the Internet, and even interconnecting commodity portable wireless devices and mobile phones. Basic elements of delay tolerant networks are store-and-forward message transfer resembling traditional mail delivery, opportunistic and intermittent routing, and an extensible cross-region resource naming service. Individual nodes of the network take an active part in routing the traffic and provide in-network data storage for application data that flows through the network. Application architecture for delay tolerant networks also differs from that used in traditional networks. It has become feasible to design applications that are network-aware and opportunistic, taking advantage of different network connection speeds and capabilities. This might change some of the basic paradigms of network application design. DTN protocols also support the design of applications that depend on processes being persistent over reboots and power failures. DTN protocols could also be applicable to traditional networks in cases where high tolerance to delays or errors is desired. It is apparent that challenged networks also challenge the traditional, strictly layered model of network application design. This thesis provides an extensive introduction to delay tolerant networking concepts and applications. Most attention is given to the challenging problems of routing and application architecture. Finally, future prospects of DTN applications and implementations are envisioned through recent research results and an interview with an active researcher of DTN networks.
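To illustrate store-and-forward delivery over intermittent contacts, the Python sketch below floods a bundle along a toy contact schedule. The node names and contact times are hypothetical, and this is simple epidemic-style copying rather than any specific DTN routing protocol.

# Toy store-and-forward (epidemic) delivery over an intermittent contact plan.

contacts = [            # (time, node_a, node_b): the two nodes meet briefly
    (1, "village", "bus"),
    (5, "bus", "town-gateway"),
    (9, "town-gateway", "internet-host"),
]

def deliver(bundle_src, bundle_dst, contacts):
    """Copy the bundle at every contact; report when the destination stores it."""
    stores = {bundle_src}                    # nodes currently holding the bundle
    for t, a, b in sorted(contacts):
        if a in stores or b in stores:
            stores |= {a, b}                 # in-network storage at both peers
        if bundle_dst in stores:
            return t, stores
    return None, stores

t, stores = deliver("village", "internet-host", contacts)
print(f"delivered at t={t} via {sorted(stores)}")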