92 results for Actor-network theory


Relevance: 30.00%

Abstract:

Objective: In 2011, the United Kingdom launched five Public Health Responsibility Deal Networks inspired by ‘nudge theory’ to facilitate healthy-lifestyle behaviors. This study used Q methodology to examine stakeholders’ views about responsibility and accountability for healthy food environments to reduce obesity and diet-related chronic diseases. Design: A purposive sample of policy elites (n=31) from government, academia, the food industry and civil society sorted 48 statements grounded in three theoretical perspectives (i.e., legitimacy, nudge and public health law). Factor analysis identified intra-individual differences in statement sorting. Results: A three-factor solution explained 64 percent of the variance across three distinct viewpoints: food environment protectors (n=17) underscored government responsibility to address unhealthy food environments; partnership pioneers (n=12) recognized government-industry partnerships as legitimate; and commercial market defenders (n=1) emphasized individual responsibility for food choices and rejected any government intervention. Conclusions: Building trust and strengthening accountability structures may help stakeholders navigate differences to engage in constructive actions. This research may inform efforts in other countries where voluntary industry partnerships are pursued to address unhealthy food environments.
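The first analytic step described above can be illustrated with toy data. This is not the study's data or its full Q-methodology pipeline; the sketch below only computes the person-to-person Pearson correlations that Q-methodology factor extraction starts from (participant labels and sorts are invented):

```python
def pearson(a, b):
    """Pearson correlation between two equal-length rankings."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Three invented participants ranking five statements from -2 (disagree)
# to +2 (agree); Q methodology correlates people, not variables.
sorts = {"p1": [2, 1, 0, -1, -2],
         "p2": [2, 0, 1, -2, -1],   # viewpoint similar to p1
         "p3": [-2, -1, 0, 1, 2]}   # opposing viewpoint
# pearson(p1, p2) = 0.8; pearson(p1, p3) = -1.0
```

Participants who load strongly on the same factor of this correlation matrix form one "viewpoint" in the study's sense.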

Relevance: 30.00%

Abstract:

Network coding has shown the promise of significant throughput improvement. In this paper, we study network throughput using network coding and explore how the maximum throughput can be achieved in a two-way relay wireless network. Unlike previous studies, we consider a more general network with an arbitrary structure of overhearing status between receivers and transmitters. To efficiently utilize the coding opportunities, we introduce the concept of network coding cliques (NCCs), upon which a formal analysis of the network throughput under network coding is elaborated. In particular, we derive a closed-form expression for the network throughput under a given traffic load in a slotted ALOHA network with basic medium access control. Furthermore, the maximum throughput, as well as the optimal medium access probability at each node, is studied under various network settings. Our theoretical findings have been validated by simulation as well.
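The paper's closed-form result is not reproduced in the abstract; as a minimal illustration of the kind of quantity involved, the sketch below computes the textbook per-slot throughput of a slotted ALOHA network without network coding, n·p·(1−p)^(n−1), and recovers the well-known optimum p = 1/n by grid search:

```python
def aloha_throughput(n: int, p: float) -> float:
    """Expected successful transmissions per slot when each of n nodes
    transmits independently with probability p (success = exactly one)."""
    return n * p * (1 - p) ** (n - 1)

def optimal_p(n: int) -> float:
    # Coarse grid search over the medium access probability.
    grid = [i / 10000 for i in range(1, 10000)]
    return max(grid, key=lambda p: aloha_throughput(n, p))

# For n = 10 nodes the optimum access probability is p = 1/n = 0.1.
```

The paper's analysis extends this kind of calculation to traffic served through coding cliques rather than individual transmissions.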

Relevance: 30.00%

Abstract:

Fiber growth, branching, and bundling are essential for the development of the crystalline fiber networks of molecular gels. In this work, for two typical crystalline fiber networks, i.e., the network of spherulitic domains and the interconnected fiber network, the related kinetic information is obtained using dynamic rheological measurements and analysis in terms of the Avrami theory. In combination with microstructure characterizations, we establish the correlation of the Avrami-derived kinetic parameter not only with the nucleation nature and growth dimensionality of fibers and branches, but also with the fiber bundles induced by fiber-fiber interactions. Our study highlights the advantage of simple dynamic rheological measurements over the spectroscopic methods used in previous studies in providing more kinetic information on fiber-fiber interactions, enabling the Avrami analyses to extract distinct kinetic features not only for fiber growth and branching, but also for bundling in the creation of strong interconnected fiber networks. This work may be helpful for the implementation of precise kinetic control of crystalline fiber network formation to achieve desirable microstructures and rheological properties for advanced applications of gel materials. This journal is © the Partner Organisations 2014.
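As a toy illustration of the Avrami analysis mentioned above (synthetic data, not the paper's measurements): the transformed fraction follows X(t) = 1 − exp(−k·tⁿ), so plotting ln(−ln(1−X)) against ln t gives a line whose slope is the Avrami exponent n:

```python
import math

def avrami_fraction(t, k, n):
    """Avrami equation: fraction transformed at time t."""
    return 1 - math.exp(-k * t ** n)

def fit_avrami_exponent(times, fractions):
    """Least-squares slope of ln(-ln(1 - X)) versus ln(t)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1 - x)) for x in fractions]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # slope = Avrami exponent n

times = [1.0, 2.0, 4.0, 8.0]
data = [avrami_fraction(t, k=0.01, n=2.5) for t in times]
# fit_avrami_exponent(times, data) recovers n = 2.5
```

In the paper's setting the transformed fraction is inferred from the evolving rheological moduli rather than generated synthetically.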

Relevance: 30.00%

Abstract:

With the arrival of the big data era, Internet traffic is growing exponentially. A wide variety of applications arise on the Internet, and traffic classification is introduced to help people manage these massive applications for security monitoring and quality-of-service purposes. A large number of Machine Learning (ML) algorithms have been introduced to deal with traffic classification. A significant challenge to classification performance comes from the imbalanced distribution of data in traffic classification systems. In this paper, we propose an Optimised Distance-based Nearest Neighbor (ODNN) approach, which has the capability of improving the classification performance on imbalanced traffic data. We analyze the proposed ODNN approach and its performance benefit from both theoretical and empirical perspectives. A large number of experiments were carried out on a real-world traffic dataset. The results show that the performance on “small classes” can be improved significantly, even with only a small amount of training data, while the performance on “large classes” remains stable.
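The abstract does not give ODNN's exact formulation; as a hedged sketch of the family it optimizes, the following is a plain distance-weighted k-nearest-neighbour classifier (toy data with invented labels), in which closer neighbours get larger votes so that nearby minority-class samples can outweigh distant majority-class ones:

```python
from collections import defaultdict

def weighted_knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); inverse-distance voting."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, y)
        for x, y in train
    )
    votes = defaultdict(float)
    for d, y in dists[:k]:
        votes[y] += 1.0 / (d + 1e-9)  # closer neighbours vote more strongly
    return max(votes, key=votes.get)

# A minority ("small") class with 2 samples and a majority ("large")
# class with 3; the invented feature vectors are 2-D flow statistics.
train = [((0.0, 0.0), "small"), ((0.1, 0.0), "small"),
         ((1.0, 1.0), "large"), ((1.1, 1.0), "large"), ((0.9, 1.1), "large")]
# weighted_knn_predict(train, (0.05, 0.0)) -> "small"
```

With plain majority voting and a large k, the minority class would be swamped; distance weighting is one standard way to counter that imbalance.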

Relevance: 30.00%

Abstract:

Multicast is an important mechanism in modern wireless networks and has attracted significant efforts to improve its performance on different metrics, including throughput, delay, and energy efficiency. Traditionally, an ideal loss-free channel model has been widely used to facilitate routing protocol design. However, the quality of wireless links is degraded by many factors, such as collisions, fading, and environmental noise, resulting in transmission failures. In this paper, we propose a reliable multicast protocol, called CodePipe, with high energy efficiency, throughput, and fairness in lossy wireless networks. Building upon opportunistic routing and random linear network coding, CodePipe not only eliminates coordination between nodes but also improves multicast throughput significantly by exploiting both intra-batch and inter-batch coding opportunities. In particular, four key techniques, namely an LP-based opportunistic routing structure, opportunistic feeding, fast batch moving, and inter-batch coding, are proposed to offer significant improvements in throughput, energy efficiency, and fairness. Moreover, we design an efficient online extension of CodePipe such that it can work in a dynamic network where nodes join and leave as time progresses. We evaluate CodePipe in the ns-2 simulator by comparing it with two state-of-the-art multicast protocols, MORE and Pacifier. Simulation results show that CodePipe significantly outperforms both of them.
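The random linear network coding that CodePipe builds on can be sketched as follows. This is a generic illustration, not CodePipe itself: each coded packet is a random linear combination of a batch's source packets over a small prime field (GF(257) here for simplicity; practical systems often use GF(2^8)), and a receiver decodes once it holds a full-rank set of combinations:

```python
import random

P = 257  # small prime field for illustration

def encode(batch, rng):
    """One coded packet: random coefficients plus the combined payload."""
    coeffs = [rng.randrange(P) for _ in batch]
    payload = [sum(c * pkt[i] for c, pkt in zip(coeffs, batch)) % P
               for i in range(len(batch[0]))]
    return coeffs, payload

def decode(coded, n):
    """Gauss-Jordan elimination mod P on [coeffs | payload] rows;
    returns the n source packets, or None if not yet full rank."""
    rows = [list(c) + list(p) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)  # modular inverse via Fermat
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][n:] for i in range(n)]

rng = random.Random(1)
batch = [[1, 2, 3], [4, 5, 6]]               # two source packets of 3 symbols
coded = [encode(batch, rng) for _ in range(4)]
decoded = decode(coded, 2)                   # == batch when full rank
```

Because any full-rank subset of coded packets suffices, forwarders need not coordinate about which specific packets each receiver is missing, which is the property CodePipe exploits.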

Relevance: 30.00%

Abstract:

As a fundamental tool for network management and security, traffic classification has attracted increasing attention in recent years. A significant challenge to the robustness of classification performance comes from zero-day applications previously unknown to traffic classification systems. In this paper, we propose a new scheme of Robust statistical Traffic Classification (RTC) that combines supervised and unsupervised machine learning techniques to meet this challenge. The proposed RTC scheme has the capability of identifying the traffic of zero-day applications as well as accurately discriminating predefined application classes. In addition, we develop a new method for automating the optimization of the RTC scheme's parameters. An empirical study on real-world traffic data confirms the effectiveness of the proposed scheme. When zero-day applications are present, the classification performance of the new scheme is significantly better than that of four state-of-the-art methods: random forest, correlation-based classification, semi-supervised clustering, and one-class SVM.
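The RTC scheme itself combines supervised and unsupervised learning in ways the abstract does not detail; the sketch below only illustrates the core idea of flagging zero-day traffic, with invented feature vectors and an illustrative distance threshold: flows far from every known application class are labelled unknown:

```python
def centroid(points):
    """Componentwise mean of a list of equal-length feature vectors."""
    return [sum(c) / len(points) for c in zip(*points)]

def classify(labeled, flow, threshold=1.0):
    """labeled: dict class_name -> list of feature vectors.
    Returns the nearest known class, or "zero-day" if the flow is
    farther than `threshold` from every class centroid."""
    cents = {name: centroid(pts) for name, pts in labeled.items()}
    dist = {name: sum((a - b) ** 2 for a, b in zip(c, flow)) ** 0.5
            for name, c in cents.items()}
    best = min(dist, key=dist.get)
    return best if dist[best] <= threshold else "zero-day"

# Invented 2-D flow statistics for two known application classes.
labeled = {"web": [(1.0, 1.0), (1.2, 0.8)], "p2p": [(5.0, 5.0), (5.2, 4.8)]}
# classify(labeled, (1.1, 0.9)) -> "web"
# classify(labeled, (9.0, 0.0)) -> "zero-day"
```

A purely supervised classifier would have to force the (9.0, 0.0) flow into one of the predefined classes, which is exactly the robustness problem RTC targets.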

Relevance: 30.00%

Abstract:

Wireless mesh networks are widely applied in many fields, such as industrial control, environmental monitoring, and military operations. Network coding is a promising technology that can improve the performance of wireless mesh networks. In particular, network coding suits wireless mesh networks well because the fixed backbone of a wireless mesh network usually has no energy constraints. However, coding collision is a severe problem affecting network performance. To avoid it, routing should be designed with an optimal combination of coding opportunity and coding validity. In this paper, we propose a Connected Dominating Set (CDS)-based and Flow-oriented Coding-aware Routing (CFCR) mechanism to actively increase potential coding opportunities. Our work provides two major contributions. First, it effectively deals with the coding collision problem between flows by introducing an information conformation process, which effectively decreases the failure rate of decoding. Second, our routing process considers the benefits of the CDS and flow coding simultaneously. Through a formalized analysis of the routing parameters, CFCR can choose an optimized route with reliable transmission and small cost. Our evaluation shows that CFCR has a lower packet loss ratio and higher throughput than existing methods such as Adaptive Control of Packet Overhead in XOR Network Coding (ACPO) and Distributed Coding-Aware Routing (DCAR).
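The CDS backbone that CFCR builds on can be sketched with a standard greedy heuristic (a generic illustration, not CFCR's own construction): grow the set from a highest-degree node, always adding the neighbour of the current set that newly dominates the most nodes, which keeps the set connected:

```python
def greedy_cds(adj):
    """adj: dict node -> set of neighbours (assumed connected graph).
    Returns a connected dominating set grown greedily."""
    start = max(adj, key=lambda v: len(adj[v]))
    cds = {start}
    covered = {start} | adj[start]
    while covered != set(adj):
        # Candidates adjacent to the current set keep it connected.
        frontier = {v for u in cds for v in adj[u]} - cds
        best = max(frontier, key=lambda v: len((adj[v] | {v}) - covered))
        cds.add(best)
        covered |= adj[best] | {best}
    return cds

# A 6-node path 0-1-2-3-4-5: every node must be in the set or adjacent
# to it, so the interior nodes form the backbone.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
```

CFCR additionally scores routes over this backbone by their coding opportunity and validity, which the sketch does not model.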

Relevance: 30.00%

Abstract:

This paper proposes a Q-learning based controller for a network of multiple intersections. Given the increasing traffic congestion in modern cities, an efficient control system is in high demand. The proposed controller is designed to adjust the green time of traffic signals with the aim of reducing vehicles' travel delay across the multi-intersection network. The designed system is a distributed traffic-timing control model that applies an individual controller to each intersection. Each controller manages its own intersection's congestion while attempting to reduce the travel delay in the whole traffic network. The experimental results indicate the satisfactory efficiency of the developed distributed Q-learning controller.
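The tabular Q-learning update that each such controller could run can be sketched as follows; the state and action encodings here are invented for illustration and are not the paper's:

```python
def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

# Two congestion states and two green-time actions per intersection.
Q = {s: {"extend": 0.0, "switch": 0.0} for s in ("low", "high")}
# Reward is the negative delay observed after taking the action.
q_update(Q, "high", "switch", reward=-2.0, s_next="low")
# Q["high"]["switch"] is now 0.1 * (-2.0) = -0.2
```

In the distributed setting each intersection maintains its own table like this, observing only local congestion but being rewarded for network-wide delay reduction.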

Relevance: 30.00%

Abstract:

Artificial neural network (ANN) models are able to predict future events based on current data. The usefulness of an ANN lies in the capacity of the model to learn and to adjust its weights based on previous errors during training. In this study, we carefully analyse existing neuronal spike sorting algorithms. The current methods use clustering as a basis to establish ground truths, which requires tedious procedures for feature selection and evaluation of the selected features. Even so, the accuracy of the clusters remains questionable. Here, we develop an ANN model that specifically addresses these drawbacks and the major challenges in neuronal spike sorting. New enhancements are introduced into the conventional backpropagation ANN for determining the network weights, input nodes, target node, and error calculation. Coiflet modelling of noise is employed to enhance the spike shape features and suppress noise. The ANN is used in conjunction with a special spiking-event detection technique to prioritize the targets. The proposed enhancements bolster the training concept and, on the whole, contribute to sorting neuronal spikes with close approximations.
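The paper's detection technique is not detailed in the abstract; a common baseline for spiking-event detection, which such work typically improves on, is amplitude thresholding against a median-absolute-deviation noise estimate, sketched here with an invented trace:

```python
def detect_spikes(signal, k=4.0):
    """Return indices whose amplitude deviates from the median by more
    than k times a MAD-based estimate of the noise standard deviation."""
    n = len(signal)
    med = sorted(signal)[n // 2]
    mad = sorted(abs(x - med) for x in signal)[n // 2]
    sigma = mad / 0.6745          # MAD -> sigma for Gaussian noise
    threshold = k * sigma
    return [i for i, x in enumerate(signal) if abs(x - med) > threshold]

# Invented voltage trace: low-amplitude noise with two large spikes.
trace = [0.1, -0.2, 0.0, 5.0, 0.1, -0.1, -4.8, 0.2]
# detect_spikes(trace) -> [3, 6]
```

The MAD is used instead of the raw standard deviation because the spikes themselves would otherwise inflate the noise estimate.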

Relevance: 30.00%

Abstract:

In a cyber-physical system (CPS), computational resources and physical resources are strongly correlated and mutually dependent. Cascading failures between coupled networks make the system more fragile than a single network. Besides the widely used giant-component metric, we study small clusters (small components) in interdependent networks after cascading failures occur. We first give an overview of how small clusters are distributed in various single networks. Then we propose a percolation-theory-based mathematical method to study how small clusters are affected by the interdependence between two coupled networks. We prove that upper bounds exist for both the fraction and the number of operating small clusters. Without loss of generality, we apply both synthetic networks and real network data in simulations to study small clusters under different interdependence models and network topologies. The extensive simulations highlight our findings: besides the giant component, a considerable proportion of small clusters exists, with the remaining part fragmenting into very tiny pieces or even a massive number of isolated vertices; and no matter how tightly the two networks are coupled, an upper bound exists on the size of small clusters. We also discover that interdependent small-world networks generally have the highest fractions of operating small clusters. Three attack strategies are compared: Inter Degree Priority Attack, Intra Degree Priority Attack, and Random Attack. We observe that the fraction of functioning small clusters remains stable and is independent of the attack strategy.
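The basic quantity under study, the split between the giant component and the small clusters of the surviving network, can be sketched with a union-find pass over a toy graph (the cascading-failure dynamics between coupled networks are not reproduced here):

```python
def components(nodes, edges):
    """Connected components via union-find, largest first."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in nodes:
        comps.setdefault(find(v), []).append(v)
    return sorted(comps.values(), key=len, reverse=True)

# Toy surviving network after failures: one triangle, two pairs,
# and one isolated vertex.
nodes = range(8)
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (5, 6)]
parts = components(nodes, edges)
# parts[0] is the giant component {0, 1, 2}; {3, 4}, {5, 6} and the
# isolated vertex {7} are the small clusters.
```

The paper's point is that the small clusters (everything after `parts[0]`) carry a non-negligible and bounded fraction of the surviving nodes.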

Relevance: 30.00%

Abstract:

The increasing complexity of computer systems and communication networks creates tremendous requirements for trust and security. This special issue includes topics on trusted computing, risk and reputation management, network security, and survivable computer systems/networks. These issues have evolved into an active and important area of research and development. The past decade has witnessed a proliferation of concurrency and computation systems for achieving high levels of trust, security, and privacy, which has become a key subject in determining future research and development activities in many academic and industrial branches. This special issue aims to present and discuss advances in current research and development in all aspects of trusted computing and network security. In addition, it provides snapshots of contemporary academic work in the field of network trusted computing. We prepared and organized this special issue to record state-of-the-art research, novel developments, and trends for future insight in this domain. In this special issue, 14 papers have been accepted for publication, demonstrating novel and original work in this field. A detailed overview of the selected works is given below.

Relevance: 30.00%

Abstract:

In this paper, a supervised fuzzy adaptive resonance theory neural network, i.e., Fuzzy ARTMAP (FAM), is integrated with a heuristic Gravitational Search Algorithm (GSA) inspired by the laws of Newtonian gravity. The proposed FAM-GSA model combines the unique features of both constituents to perform data classification. The classification performance of FAM-GSA is benchmarked against other state-of-the-art machine learning classifiers using an artificially generated data set and two real data sets from different domains. Comparatively, the empirical results indicate that FAM-GSA is generally able to achieve better classification performance with a parsimonious network size, but at the expense of a higher computational load.
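A minimal one-dimensional sketch of the GSA component (the hyperparameters and the simplified force law are illustrative, not FAM-GSA's): candidate solutions act as masses, better fitness means a heavier mass, and heavier masses pull the population toward themselves while gravity decays over time:

```python
import random

def gsa_minimize(f, n=10, steps=50, g0=1.0, seed=0):
    """Toy 1-D Gravitational Search Algorithm; returns the best point seen."""
    rng = random.Random(seed)
    xs = [rng.uniform(-10, 10) for _ in range(n)]
    vs = [0.0] * n
    best = min(xs, key=f)
    for t in range(steps):
        fits = [f(x) for x in xs]
        lo, hi = min(fits), max(fits)
        # Better fitness -> heavier normalized mass (minimization).
        masses = [(hi - fi) / (hi - lo + 1e-12) + 1e-12 for fi in fits]
        total = sum(masses)
        g = g0 * (1 - t / steps)  # gravitational constant decays over time
        for i in range(n):
            # Acceleration of i: its own mass cancels out of force / mass.
            acc = sum(rng.random() * g * masses[j] / total *
                      (xs[j] - xs[i]) / (abs(xs[j] - xs[i]) + 1e-12)
                      for j in range(n) if j != i)
            vs[i] = rng.random() * vs[i] + acc
            xs[i] += vs[i]
        best = min([best] + xs, key=f)
    return best

# Minimizing (x - 3)^2: the result is never worse than the best initial
# random sample and typically lands near x = 3.
```

In FAM-GSA this kind of population search tunes the network's parameters instead of a scalar, but the attract-the-heavy-masses dynamic is the same.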

Relevance: 30.00%

Abstract:

The idea of meta-cognitive learning has enriched the landscape of evolving systems, because it emulates three fundamental aspects of human learning: what-to-learn, how-to-learn, and when-to-learn. However, existing meta-cognitive algorithms still exclude Scaffolding theory, which can realize a plug-and-play classifier. Consequently, these algorithms require laborious pre- and/or post-training processes to be carried out in addition to the main training process. This paper introduces a novel meta-cognitive algorithm termed GENERIC-Classifier (gClass), in which the how-to-learn part constitutes a synergy of Scaffolding Theory, a tutoring theory that fosters the ability to sort out complex learning tasks, and Schema Theory, a learning theory of knowledge acquisition by humans. The what-to-learn aspect adopts an online active learning concept by virtue of an extended conflict-and-ignorance method, making gClass an incremental semi-supervised classifier, whereas the when-to-learn component makes use of the standard sample-reserved strategy. A generalized version of the Takagi-Sugeno-Kang (TSK) fuzzy system is devised to serve as the cognitive constituent: the rule premise is underpinned by multivariate Gaussian functions, while the rule consequent employs a subset of the non-linear Chebyshev polynomials. Thorough empirical studies, confirmed by their corresponding statistical tests, have numerically validated the efficacy of gClass, which delivers better classification rates than state-of-the-art classifiers while having lower complexity.
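The TSK inference that gClass generalizes can be sketched in one dimension (gClass itself uses multivariate Gaussian premises and Chebyshev-polynomial consequents; the rules below are invented): each rule pairs a Gaussian premise with a consequent function, and the output is the firing-strength-weighted average of the consequents:

```python
import math

def tsk_predict(rules, x):
    """rules: list of (center, sigma, (a, b)) with consequent y = a*x + b.
    Output = sum_i w_i * y_i / sum_i w_i with Gaussian firing strengths w_i."""
    weights = [math.exp(-((x - c) ** 2) / (2 * s ** 2)) for c, s, _ in rules]
    outputs = [a * x + b for _, _, (a, b) in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

rules = [(0.0, 1.0, (0.0, 0.0)),   # "near zero"  -> y = 0
         (5.0, 1.0, (1.0, 0.0))]   # "near five"  -> y = x
# tsk_predict(rules, 0.0) is about 0.0 and tsk_predict(rules, 5.0) about 5.0
```

gClass's meta-cognitive layer then decides when and how rules like these are added, pruned, or adapted as samples stream in.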

Relevance: 30.00%

Abstract:

The analysis of rock slope stability is a classical problem for geotechnical engineers. However, for practicing engineers, the proper software is not usually user-friendly, and additional resources capable of providing information useful for decision-making are required. This study developed a convenient tool that can provide a prompt assessment of rock slope stability. A nonlinear input-output mapping of the rock slope system was constructed using a neural network trained by an extreme learning algorithm. The training data were obtained using finite element upper- and lower-bound limit analysis methods. The newly developed techniques in this study can either estimate the factor of safety for a rock slope or obtain the implicit parameters through back analyses. Back-analysis parameter identification was performed using a terminal steepest descent algorithm based on finite-time stability theory. This algorithm not only guarantees finite-time error convergence but also achieves exact zero convergence, unlike the conventional steepest descent algorithm, in which the training error never reaches zero.
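The extreme-learning idea mentioned above can be sketched with NumPy on a toy regression (the target function and layer sizes are invented; the study's network maps slope parameters to a factor of safety): hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(x, y, hidden=50):
    """Extreme learning machine: random fixed hidden layer, least-squares
    output weights."""
    w = rng.normal(size=(x.shape[1], hidden))
    b = rng.normal(size=hidden)
    h = np.tanh(x @ w + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(h, y, rcond=None)  # closed-form output weights
    return w, b, beta

def elm_predict(model, x):
    w, b, beta = model
    return np.tanh(x @ w + b) @ beta

# Toy stand-in for a safety-factor surface over one slope parameter.
x = np.linspace(-1, 1, 40).reshape(-1, 1)
y = x.ravel() ** 2
model = elm_fit(x, y)
# elm_predict(model, x) closely approximates y on the training range
```

Because no iterative backpropagation is needed, a tool like this can be retrained quickly, which matches the study's goal of a prompt, practitioner-friendly assessment.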
