893 results for ARPANET (Computer network)
Abstract:
Streaming SIMD Extensions (SSE) is a special feature embedded in the Intel Pentium III and IV classes of microprocessors. It enables the execution of SIMD-type operations to exploit data parallelism. This article presents how SSE can be used to improve the computational performance of a railway network simulator. Voltages and currents at various points of the supply system of an electrified railway line are crucial for design, daily operation and planning. With computer simulation, their time-variations can be obtained by solving a matrix equation whose size mainly depends upon the number of trains present in the system. A large coefficient matrix, resulting from a congested railway line, inevitably leads to heavier computational demand and hence jeopardizes the simulation speed. With the special architectural features of the latest processors on PC platforms, significant speed-up in computations can be achieved.
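As a rough illustration of the data parallelism SSE exposes (not code from the article; the function, array names and layout are assumptions), the sketch below uses the SSE intrinsics available on Pentium III-class processors to update four single-precision values per instruction, the kind of inner loop that dominates the solution of the simulator's matrix equation.

// Minimal sketch, assuming single-precision data and SSE intrinsics:
// a scaled vector update y[i] += a * x[i], four floats per instruction.
#include <xmmintrin.h>  // SSE intrinsics (Pentium III and later)
#include <cstddef>

void saxpy_sse(float a, const float* x, float* y, std::size_t n) {
    const __m128 va = _mm_set1_ps(a);                 // broadcast the scalar into all 4 lanes
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);              // load 4 floats from x
        __m128 vy = _mm_loadu_ps(y + i);              // load 4 floats from y
        vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));      // 4 multiply-adds in parallel
        _mm_storeu_ps(y + i, vy);                     // store 4 results
    }
    for (; i < n; ++i) y[i] += a * x[i];              // scalar tail for leftover elements
}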
Abstract:
Streaming SIMD Extensions (SSE) is a unique feature embedded in the Pentium III class of microprocessors. By fully exploiting SSE, parallel algorithms can be implemented on a standard personal computer and a theoretical speedup of four can be achieved. In this paper, we demonstrate the implementation of a parallel LU matrix decomposition algorithm for solving power system network equations with SSE and discuss the advantages and disadvantages of this approach.
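The paper's implementation is not reproduced here; as a hedged sketch, the fragment below shows how the row-update kernel of LU decomposition (the dominant O(n³) step) can be vectorized with SSE, eliminating four columns per instruction. The in-place Doolittle form without pivoting and the flat row-major layout are illustrative assumptions.

// Sketch, not the paper's code: in-place LU decomposition (no pivoting) with
// the row-update loop vectorized using SSE. A is an n x n row-major matrix.
#include <xmmintrin.h>
#include <cstddef>

void lu_decompose_sse(float* A, std::size_t n) {
    for (std::size_t k = 0; k < n; ++k) {
        for (std::size_t i = k + 1; i < n; ++i) {
            float m = A[i * n + k] / A[k * n + k];        // elimination multiplier
            A[i * n + k] = m;                             // store the L factor in place
            const __m128 vm = _mm_set1_ps(m);
            std::size_t j = k + 1;
            for (; j + 4 <= n; j += 4) {                  // vectorized row update
                __m128 akj = _mm_loadu_ps(&A[k * n + j]);
                __m128 aij = _mm_loadu_ps(&A[i * n + j]);
                aij = _mm_sub_ps(aij, _mm_mul_ps(vm, akj));
                _mm_storeu_ps(&A[i * n + j], aij);
            }
            for (; j < n; ++j)                            // scalar tail
                A[i * n + j] -= m * A[k * n + j];
        }
    }
}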
Abstract:
Online dating networks, a type of social network, are gaining popularity. With many people joining and being available in the network, users are overwhelmed with choices when selecting their ideal partners. This problem can be overcome by using recommendation methods. However, traditional recommendation methods are ineffective and inefficient for online dating networks, where the dataset is sparse and/or large and two-way matching is required. We propose a methodology that uses clustering and SimRank to recommend matching candidates to users in an online dating network. Data from a live online dating network is used in the evaluation. The recommendation success rate obtained with the proposed method is compared with the baseline success rate of the network, and it is found to be roughly double.
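The abstract names SimRank as the similarity measure; as a hedged illustration (the graph representation, decay factor and iteration count are assumptions, not values from the paper), the sketch below computes SimRank scores over a directed interaction graph, which could then be used to rank candidate matches for a given user.

// Hedged sketch of the SimRank iteration: two users are similar if the users
// who interacted with them are themselves similar. in_nbrs[v] lists the
// in-neighbours of node v (e.g. users who contacted v).
#include <vector>
#include <cstddef>

using Graph = std::vector<std::vector<std::size_t>>;

std::vector<std::vector<double>> simrank(const Graph& in_nbrs,
                                         double C = 0.8, int iters = 5) {
    const std::size_t n = in_nbrs.size();
    std::vector<std::vector<double>> s(n, std::vector<double>(n, 0.0));
    for (std::size_t v = 0; v < n; ++v) s[v][v] = 1.0;       // s(v, v) = 1

    for (int it = 0; it < iters; ++it) {
        auto next = s;
        for (std::size_t a = 0; a < n; ++a)
            for (std::size_t b = 0; b < n; ++b) {
                if (a == b || in_nbrs[a].empty() || in_nbrs[b].empty()) continue;
                double sum = 0.0;
                for (std::size_t i : in_nbrs[a])
                    for (std::size_t j : in_nbrs[b])
                        sum += s[i][j];                      // similarity of in-neighbours
                next[a][b] = C * sum / (in_nbrs[a].size() * in_nbrs[b].size());
            }
        s = std::move(next);
    }
    return s;                                                // pairwise similarity scores
}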
Abstract:
Owing to changing attitudes and lifestyles, people nowadays expect to find new partners and friends in a variety of ways. Online dating networks provide a place for people to meet each other and make contact with the objective of developing a personal, romantic or sexual relationship. To meet users' rising expectations, online matching companies are trying to adopt recommender systems. However, existing recommendation techniques such as content-based, collaborative filtering or hybrid techniques focus on users' explicit contact behaviors but ignore the implicit relationships among users in the network. This paper proposes a social matching system that uses past relations and user similarities to find potential matches. The proposed system is evaluated on a dataset collected from an online dating network. Empirical analysis shows that the recommendation success rate increases to 31%, compared with the baseline success rate of 19%.
Abstract:
The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer than recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated with three triangle-shaped network cells, having average inter-station distances of 69 km, 118 km and 166 km respectively. The performance characteristics appraised included initialisation success rate, initialisation time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective on the performance of the systems under test.

The results showed that the performances of all network RTK solutions assessed were affected by the increase in inter-station distances to similar degrees. The MAX solution achieved the highest initialisation success rate, 96.6% on average, albeit with a longer initialisation time. The two VRS approaches achieved a lower initialisation success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialisation, the results indicated good agreement between the actual error growth in both the horizontal and vertical components and the accuracy specified by the manufacturers in RMS and parts-per-million (ppm) values.

Additionally, the VRS approaches performed better than MAX and i-MAX when tested on the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and may then fall back to the nearest single-base RTK (or RAW) mode. The position error occasionally exceeded 2 metres, showing that the RTK rover software was using an incorrectly fixed ambiguity solution to estimate the rover position rather than automatically dropping back to an ambiguity-float solution. The results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. Additionally, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not reliably identify incorrectly fixed integer ambiguity solutions. In summary, this independent assessment has identified problems and failures that can occur in all of the systems tested, especially when they are pushed beyond their recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is necessary for precision applications and thus deserves serious attention from researchers and system providers.
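For reference, the RMS-plus-ppm accuracy specifications mentioned in the abstract conventionally combine a constant term with a distance-proportional term; stated loosely (the coefficients are instrument-specific and are not values from this study):

\[
  \sigma \;\approx\; a \;+\; b \cdot 10^{-6} \cdot D ,
\]

where a is the constant error component (typically a few millimetres), b is the ppm coefficient and D is the baseline or inter-station distance.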
Abstract:
Personalised social matching systems can be seen as recommender systems that recommend people to other people in social networks. However, with the rapid growth in the number of users in social networks and in the information that a social matching system requires about those users, conventional recommender system techniques have become inadequate for matching users in social networks. This paper presents a hybrid social matching system that takes advantage of both collaborative and content-based concepts of recommendation. A clustering technique is used to reduce the number of users that the matching system needs to consider and to overcome other problems from which social matching systems suffer, such as the cold-start problem caused by the absence of implicit information about new users. The proposed system has been evaluated on a dataset obtained from an online dating website. Empirical analysis shows that the accuracy of the matching process increases when both user information (explicit data) and user behavior (implicit data) are used.
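The clustering technique itself is not specified in the abstract; as a hedged sketch (k-means, the feature encoding and the value of k are assumptions made here for illustration), the fragment below groups users by their profile feature vectors so that a matching query only needs to consider candidates from the same cluster.

// Hedged sketch, not the paper's algorithm: k-means over user profile feature
// vectors; candidate matches are then drawn only from a user's own cluster.
// Assumes users.size() >= k and non-empty, equal-length feature vectors.
#include <vector>
#include <cstddef>
#include <limits>

using Vec = std::vector<double>;

static double sq_dist(const Vec& a, const Vec& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

std::vector<int> kmeans(const std::vector<Vec>& users, std::size_t k, int iters = 20) {
    std::vector<Vec> centers(users.begin(), users.begin() + k);   // naive init: first k users
    std::vector<int> assign(users.size(), 0);
    for (int it = 0; it < iters; ++it) {
        for (std::size_t u = 0; u < users.size(); ++u) {          // assignment step
            double best = std::numeric_limits<double>::max();
            for (std::size_t c = 0; c < k; ++c) {
                double d = sq_dist(users[u], centers[c]);
                if (d < best) { best = d; assign[u] = static_cast<int>(c); }
            }
        }
        std::vector<Vec> sums(k, Vec(users[0].size(), 0.0));      // update step
        std::vector<std::size_t> counts(k, 0);
        for (std::size_t u = 0; u < users.size(); ++u) {
            for (std::size_t i = 0; i < users[u].size(); ++i) sums[assign[u]][i] += users[u][i];
            ++counts[assign[u]];
        }
        for (std::size_t c = 0; c < k; ++c)
            if (counts[c] > 0)
                for (std::size_t i = 0; i < sums[c].size(); ++i)
                    centers[c][i] = sums[c][i] / counts[c];
    }
    return assign;                                                // cluster label per user
}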
Abstract:
Trust can be used for neighbor formation to generate automated recommendations. User-assigned explicit rating data can be used for this purpose; however, explicit rating data is not always available. In this paper we present a new method of generating a trust network based on users' interest similarity. To identify interest similarity, we use users' personalized tag information. This trust network can then be used to find the neighbors from which automated recommendations are made. Our experimental results show that the proposed method outperforms the traditional collaborative filtering approach in precision.
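The similarity measure over tags is not given in the abstract; the sketch below is one plausible realisation (cosine similarity over tag-frequency profiles; the profile type and the thresholding idea are assumptions): users whose tag profiles are sufficiently similar would be connected in the trust network used for neighbor formation.

// Hedged sketch: interest similarity between two users from their personalized
// tag profiles, using cosine similarity. Pairs whose similarity exceeds a
// threshold could form edges of the trust network.
#include <map>
#include <string>
#include <cmath>

using TagProfile = std::map<std::string, double>;   // tag -> usage count or weight

double tag_cosine_similarity(const TagProfile& a, const TagProfile& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (const auto& [tag, wa] : a) {
        na += wa * wa;
        auto it = b.find(tag);
        if (it != b.end()) dot += wa * it->second;   // shared tags contribute to the dot product
    }
    for (const auto& [tag, wb] : b) nb += wb * wb;
    if (na == 0.0 || nb == 0.0) return 0.0;          // empty profile: no basis for similarity
    return dot / (std::sqrt(na) * std::sqrt(nb));
}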
Abstract:
In online social networks, users communicate with other members whom they know personally. They also communicate with members of the network who are friends of their friends, and who may become friends within their friends' networks. They share their experiences and opinions within the social network about an item, which may be a product or a service. The user faces the problem of evaluating trust in a service or service provider before making a choice. Opinions, reputations and recommendations influence users' choice and usage of online resources. Recommendations may be received through a chain of friends of friends, so the problem for the user is to be able to evaluate various types of trust recommendations and reputations. Such opinions or recommendations greatly influence whether other members of the community choose to use or enjoy the item. Users share information on the level of trust they explicitly assign to other users, and this trust can be taken into account when making decisions based on any recommendation. When there is no direct connection to the recommending user, propagated trust can be useful.
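Propagated trust is not defined in the abstract; one common model, shown here purely as a hedged illustration (multiplicative decay along the chain is an assumption, not the paper's method), combines the explicit trust values on each hop of the friend-of-friend chain.

// Hedged sketch: propagate trust along a chain of users by multiplying the
// explicit trust values on each hop. Trust values are assumed to lie in [0, 1],
// so propagated trust decays with every additional hop.
#include <vector>

double propagated_trust(const std::vector<double>& hop_trust) {
    double t = 1.0;
    for (double h : hop_trust) t *= h;
    return t;
}

// Example: A trusts B at 0.9, B trusts C at 0.8, C trusts the recommender at 0.7,
// giving A a propagated trust of 0.9 * 0.8 * 0.7 = 0.504 in the recommender.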
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³ √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
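Written as a displayed formula (a paraphrase of the bound stated in the abstract, with the training-set error estimate written as a placeholder; its exact form is given in the paper):

\[
  \Pr[\text{misclassification}] \;\le\; \hat{\varepsilon}_m \;+\; O\!\left( A^{3} \sqrt{\frac{\log n}{m}} \right),
\]

where A bounds the per-unit sum of weight magnitudes, n is the input dimension and m is the number of training patterns, with log A and log m factors omitted.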
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real-valued prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
Abstract:
Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial-of-service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to rely predominantly on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and for feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
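As a hedged illustration of the time-based header statistics described above (the fields, the two-second window and the three counters are illustrative assumptions, not features from any particular NIDS), the sketch below summarises recent TCP/IP headers from one source address into simple indicators for scans, worms and denial-of-service behaviour.

// Hedged sketch: time-window statistics over TCP/IP packet headers.
#include <deque>
#include <set>
#include <string>
#include <cstddef>
#include <cstdint>

struct HeaderRecord {
    double        timestamp;   // arrival time in seconds
    std::string   src_ip;
    std::string   dst_ip;
    std::uint16_t dst_port;
    bool          syn_flag;
};

struct WindowStats {
    std::size_t packets_from_src   = 0;  // traffic volume from one source (DoS indicator)
    std::size_t distinct_dst_ports = 0;  // port spread (scan indicator)
    std::size_t syn_count          = 0;  // half-open connection attempts
};

WindowStats window_stats(const std::deque<HeaderRecord>& recent,
                         const std::string& src, double now, double window = 2.0) {
    WindowStats s;
    std::set<std::uint16_t> ports;
    for (const auto& r : recent) {
        if (now - r.timestamp > window || r.src_ip != src) continue;  // outside window or other source
        ++s.packets_from_src;
        ports.insert(r.dst_port);
        if (r.syn_flag) ++s.syn_count;
    }
    s.distinct_dst_ports = ports.size();
    return s;
}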
Abstract:
This paper presents a “research frame” which we have found useful in analyzing complex socio-technical situations. The research frame is based on aspects of actor-network theory: “interessement”, “enrollment”, “points of passage” and the “trial of strength”. Each of these aspects is described in turn, making clear its purpose in the overall research frame. Having established the research frame, it is used to analyse two examples. First, the use of speech recognition technology is examined in two different contexts, showing how to apply the frame to compare and contrast current situations. Next, a current medical consultation context is described and the research frame is used to consider how it could change with innovative technology. In both examples, the research frame shows that the use of an artefact or technology must be considered together with the context in which it is used.
Abstract:
Networked control systems (NCSs) offer many advantages over conventional control; however, they also present challenging problems such as network-induced delay and packet losses. This paper proposes a predictive compensation approach for simultaneous network-induced delays and packet losses. Unlike the majority of existing NCS control methods, the proposed approach addresses the co-design of network and controller. It also relaxes the requirements for precise process models and a full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values and sends them in a single packet to the actuator. Upon receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signal from the control packet. When a packet dropout occurs, the actuator uses past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
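The actuator-side logic just described lends itself to a short sketch (the data structures, the sampling-period indexing of delays and the zero default are assumptions; this is not the paper's implementation): the packet carries one control value per candidate delay, the actuator applies the entry matching the measured delay, and it reuses the most recent packet when the current one is lost.

// Hedged sketch of the actuator-side selection described in the abstract.
// A control packet carries redundant values u[0..D], where u[d] is the control
// to apply if the measured sensor-to-actuator delay is d sampling periods.
#include <vector>
#include <optional>
#include <algorithm>
#include <cstddef>

struct ControlPacket {
    std::vector<double> u;            // one control value per candidate delay
};

class Actuator {
public:
    // Called every sampling period; packet is empty when a dropout occurred.
    double select_control(const std::optional<ControlPacket>& packet,
                          std::size_t measured_delay) {
        if (packet) last_packet_ = *packet;                 // fresh packet: remember it
        if (last_packet_.u.empty()) return 0.0;             // nothing received yet
        std::size_t d = std::min(measured_delay,            // clamp to largest predicted delay
                                 last_packet_.u.size() - 1);
        return last_packet_.u[d];                           // value matching the measured delay
    }
private:
    ControlPacket last_packet_;                             // reused when packets are dropped
};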
Abstract:
Online social networks can be found everywhere, from chat websites like MSN and blogs such as MySpace to social media such as YouTube and Second Life. Among them, one interesting and fast-growing type is the online dating network. This paper analyzes an online dating network from a social network analysis point of view. Observations are made and results are obtained with the aim of suggesting a better recommendation system for people-to-people networks.