952 results for PACKET-SWITCHED NETWORK
Abstract:
Trust can be used for neighbor formation to generate automated recommendations. Explicit rating data assigned by users can serve this purpose; however, such data is not always available. In this paper we present a new method of generating a trust network based on users' interest similarity. To identify interest similarity, we use users' personalized tag information. The resulting trust network can be used to find neighbors for making automated recommendations. Our experimental results show that the proposed method outperforms the traditional collaborative filtering approach in precision.
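A minimal sketch of how tag-based interest similarity could drive neighbor selection, assuming each user's profile is a simple tag-to-count dictionary and using cosine similarity as one plausible similarity measure; the abstract does not specify the exact measure, and all names and profiles below are illustrative:

```python
import math

def tag_cosine_similarity(tags_a, tags_b):
    """Cosine similarity between two users' tag-frequency profiles.

    tags_a, tags_b: dicts mapping tag -> usage count.
    """
    common = set(tags_a) & set(tags_b)
    dot = sum(tags_a[t] * tags_b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in tags_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tags_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def top_k_neighbors(user, profiles, k=10):
    """Rank all other users by tag-based interest similarity; keep the top k."""
    scores = [(other, tag_cosine_similarity(profiles[user], profile))
              for other, profile in profiles.items() if other != user]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:k]

# Example: find neighbors for "alice" from hypothetical tag profiles.
profiles = {
    "alice": {"python": 5, "networks": 3},
    "bob":   {"python": 2, "networks": 4, "cooking": 1},
    "carol": {"gardening": 6},
}
print(top_k_neighbors("alice", profiles, k=2))
```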
Abstract:
…know personally. They also communicate with other members of the network who are friends of their friends, and who may in turn belong to those friends' networks. They share their experiences and opinions within the social network about an item, which may be a product or a service. The user faces the problem of evaluating trust in a service or service provider before making a choice. Opinions, reputations and recommendations will influence users' choice and usage of online resources. Recommendations may be received through a chain of friends of friends, so the problem for the user is to be able to evaluate various types of trust recommendations and reputations. Such an opinion or recommendation strongly influences whether other members of the community choose to use or enjoy the item. Users share information on the level of trust they explicitly assign to other users. This trust can be used when making decisions based on any recommendation. In the absence of a direct connection to the recommending user, propagated trust could be useful.
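A minimal sketch of propagated trust along a referral chain, assuming trust values in [0, 1] and a multiplicative propagation rule; the abstract does not fix a specific rule, so this is only one common choice:

```python
def propagated_trust(chain):
    """Combine direct trust values along a referral chain (user -> ... -> recommender).

    Assumes each value is in [0, 1]; multiplicative decay is one common
    propagation rule when no direct trust link to the recommender exists.
    """
    trust = 1.0
    for value in chain:
        trust *= value
    return trust

# Example: Alice trusts Bob 0.9 and Bob trusts Carol 0.8, so Alice's
# propagated trust in a recommendation from Carol is 0.72.
print(propagated_trust([0.9, 0.8]))
```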
Abstract:
Unlike conventional methods for structural reliability evaluation, such as first/second-order reliability methods (FORM/SORM) or Monte Carlo simulation based on the corresponding limit state functions, this paper proposes a novel approach based on a dynamic object-oriented Bayesian network (DOOBN) for predicting the structural reliability of a steel bridge element. The DOOBN approach can effectively model the deterioration processes of a steel bridge element and predict its structural reliability over time. The approach can also perform Bayesian updating with observed information from measurements, monitoring and visual inspection. Moreover, the computational capacity embedded in the approach can be used to facilitate integrated management and maintenance optimization in a bridge system. A steel bridge girder is used to validate the proposed approach, and the predicted results are compared with those evaluated by the FORM method.
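A minimal sketch of the Bayesian-updating idea, not the authors' DOOBN: a hypothetical discrete condition-state prior for a bridge element is revised with an inspection observation via Bayes' rule (states and probabilities are invented for illustration):

```python
# Hypothetical condition states and probabilities, for illustration only.
states = ["good", "fair", "poor"]
prior = {"good": 0.6, "fair": 0.3, "poor": 0.1}

# Likelihood of observing "visible corrosion" at inspection, given each state.
likelihood = {"good": 0.05, "fair": 0.40, "poor": 0.85}

# Bayes' rule: P(state | observation) is proportional to P(observation | state) * P(state).
unnormalised = {s: likelihood[s] * prior[s] for s in states}
evidence = sum(unnormalised.values())
posterior = {s: unnormalised[s] / evidence for s in states}

print(posterior)  # belief shifts towards "fair"/"poor" after the observation
```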
Abstract:
The broad research questions of the book are: How can successful, interdisciplinary collaboration contribute to research innovation through Practice-led research? What contributes to the design, production and curation of successful new media art? What are the implications of exhibiting it across dual sites for artists, curators and participant audiences? Is it possible to create an 'intimate transaction' between people who are separated by vast distances but joined by interfaces and distributed networks? Centred on a new media work of the same name by the Transmute Collective (led by Keith Armstrong), this book provides insights from multidisciplinary perspectives. Visual, sound and performance artists, furniture designers, spatial architects, technology systems designers, and curators who collaborated in the production of Intimate Transactions discuss their design philosophies, working processes and resolution of this major new media work. Analytical and philosophical essays by international writers complement these writings on production. They consider how new media art, like Intimate Transactions, challenges traditional understandings of art, curatorial installation and exhibition experience because of the need to take into account interaction, the reconfiguration of space, co-presence, performativity and inter-site collaboration.
Abstract:
Scaffolds with open-pore morphologies offer several advantages in cell-based tissue engineering, but their use is limited by a low cell seeding efficiency. We hypothesized that inclusion of a collagen network as filling material within the open-pore architecture of polycaprolactone-tricalcium phosphate (PCL-TCP) scaffolds increases human bone marrow stromal cell (hBMSC) seeding efficiency under perfusion and the in vivo osteogenic capacity of the resulting constructs. PCL-TCP scaffolds, rapid-prototyped with a honeycomb-like architecture, were filled with a collagen gel and subsequently lyophilized, with or without final crosslinking. Collagen-free scaffolds were used as controls. The seeding efficiency was assessed after overnight perfusion of expanded hBMSC directly through the scaffold pores using a bioreactor system. By seeding and culturing freshly harvested hBMSC under perfusion for 3 weeks, the osteogenic capacity of the generated constructs was tested by ectopic implantation in nude mice. The presence of the collagen network, independently of the crosslinking process, significantly increased the cell seeding efficiency (2.5-fold) and reduced the loss of clonogenic cells in the supernatant. Although no implant generated frank bone tissue, possibly due to the mineral distribution within the scaffold polymer phase, the presence of a non-crosslinked collagen phase led to in vivo formation of scattered structures of dense osteoid. Our findings verify that the inclusion of a collagen network within open-morphology porous scaffolds improves cell retention under perfusion seeding. In the context of cell-based therapies, collagen-filled porous scaffolds are expected to yield superior cell utilization, and could be combined with perfusion-based bioreactor devices to streamline graft manufacture.
Abstract:
This article reviews some key critical writing about the commodification or exploitation of networked social relations in the creative industries. Through a comparative case study of networks in the fashion and new media industries in the city of Manchester, UK, the article draws attention to the social, cultural and aesthetic aspects of the networks among creative practitioners. It argues that, amid increasing commercialisation in the creative industries, there are networked spaces within which non-instrumental values are created. The building of social networks sheds light on how creatives perceive their work in these industries, both economically and socially/culturally.
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
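A small sketch that evaluates the stated bound, training error estimate plus A³√((log n)/m), for hypothetical values of A, n and m, just to show how the bound scales with the per-unit weight-magnitude bound and the number of training patterns (the log A and log m factors mentioned in the abstract are ignored, as there):

```python
import math

def weight_based_bound(training_error_estimate, A, n, m):
    """Evaluate the abstract's bound: error estimate + A^3 * sqrt(log(n) / m).

    A: bound on the sum of weight magnitudes per unit
    n: input dimension
    m: number of training patterns
    """
    return training_error_estimate + (A ** 3) * math.sqrt(math.log(n) / m)

# Hypothetical values, only to show how the bound scales with A and m.
print(weight_based_bound(training_error_estimate=0.05, A=2.0, n=100, m=10_000))
```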
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
Abstract:
In fault detection and diagnostics, limitations arising from the sensor network architecture are one of the main challenges in evaluating a system's health status. The design of the sensor network architecture is usually not based solely on diagnostic purposes; other factors such as controls, financial constraints, and practical limitations are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components, which can significantly increase the complexity of diagnostic problems. In this paper a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian network based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modeling and measurement constraints.
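A minimal sketch of diagnosis when one sensor covers several components, using a hypothetical two-component noisy-OR-style sensor model and exact enumeration; this is not the paper's HVAC model, and all probabilities are invented for illustration:

```python
from itertools import product

# Hypothetical two-component example: one shared sensor reads "abnormal"
# when either component is faulty, with some noise. All probabilities invented.
prior_fault = {"comp_A": 0.02, "comp_B": 0.05}

def p_sensor_abnormal(fault_a, fault_b):
    """Simple sensor model: hypothetical false-alarm and detection rates."""
    if not fault_a and not fault_b:
        return 0.01   # false-alarm rate when both components are healthy
    return 0.95       # detection rate when at least one component is faulty

# Exact enumeration over the two binary fault variables, conditioning on
# the observed abnormal reading, to obtain each component's fault posterior.
posterior_mass = {"comp_A": 0.0, "comp_B": 0.0}
evidence = 0.0
for fault_a, fault_b in product([False, True], repeat=2):
    joint = ((prior_fault["comp_A"] if fault_a else 1 - prior_fault["comp_A"])
             * (prior_fault["comp_B"] if fault_b else 1 - prior_fault["comp_B"])
             * p_sensor_abnormal(fault_a, fault_b))
    evidence += joint
    if fault_a:
        posterior_mass["comp_A"] += joint
    if fault_b:
        posterior_mass["comp_B"] += joint

print({c: mass / evidence for c, mass in posterior_mass.items()})
```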