836 results for Research networks


Relevance:

30.00%

Publisher:

Abstract:

This paper presents results from the first use of neural networks for the real-time feedback control of high temperature plasmas in a Tokamak fusion experiment. The Tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the Tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most Tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multi-layer perceptron, using a hybrid of digital and analogue technology, has been developed.
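The multi-layer perceptron mapping at the heart of such a controller can be sketched in software as follows (a minimal numerical illustration, not the paper's parallel digital/analogue hardware; the layer sizes and tanh activation are assumptions):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Single-hidden-layer perceptron: magnetic diagnostic signals in,
    plasma position/shape control estimates out."""
    h = np.tanh(W1 @ x + b1)  # hidden layer with tanh activation
    return W2 @ h + b2        # linear output layer

# Illustrative dimensions: 16 diagnostic signals -> 8 hidden units -> 4 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 16)), np.zeros(8)
W2, b2 = rng.standard_normal((4, 8)), np.zeros(4)
y = mlp_forward(rng.standard_normal(16), W1, b1, W2, b2)
print(y.shape)  # (4,)
```

The appeal for hardware is that each layer is a fixed matrix-vector product followed by a pointwise nonlinearity, which parallelises naturally.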

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Using electricity load data and training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise and forgetting factors for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. We also find that a recently-proposed alternative novelty criterion, found to be more robust in stationary environments, does not fare so well in the non-stationary case due to the need for filter adaptability during training.
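The dynamic model-order increment procedure referred to here follows Platt's resource allocating network: a new RBF unit is allocated only when an input is judged novel. A minimal sketch of that novelty test (the threshold values, which are the parameters the abstract reports sensitivity to, are illustrative assumptions):

```python
import numpy as np

def is_novel(x, y, y_pred, centres, err_thresh=0.05, dist_thresh=0.5):
    """Platt-style novelty test for a resource allocating network (RAN):
    allocate a new RBF unit only if the prediction error AND the distance
    to the nearest existing centre both exceed their thresholds."""
    err = abs(y - y_pred)
    dist = min(np.linalg.norm(x - c) for c in centres) if centres else np.inf
    return bool(err > err_thresh and dist > dist_thresh)

centres = [np.array([0.0, 0.0])]
print(is_novel(np.array([2.0, 2.0]), 1.0, 0.2, centres))    # True: far away and poorly predicted
print(is_novel(np.array([0.01, 0.0]), 1.0, 0.99, centres))  # False: near a centre and accurate
```

In a non-stationary setting these thresholds interact with the filter's plasticity, which is why the abstract examines system noise and forgetting factors alongside them.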

Relevance:

30.00%

Publisher:

Abstract:

Online model order complexity estimation remains one of the key problems in neural network research. The problem is further exacerbated in situations where the underlying system generator is non-stationary. In this paper, we introduce a novelty criterion for resource allocating networks (RANs) which is capable of being applied to both stationary and slowly varying non-stationary problems. The deficiencies of existing novelty criteria are discussed and the relative performances are demonstrated on two real-world problems: electricity load forecasting and exchange rate prediction.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is a study of the generation of topographic mappings - dimension reducing transformations of data that preserve some element of geometric structure - with feed-forward neural networks. As an alternative to established methods, a transformational variant of Sammon's method is proposed, where the projection is effected by a radial basis function neural network. This approach is related to the statistical field of multidimensional scaling, and from that the concept of a 'subjective metric' is defined, which permits the exploitation of additional prior knowledge concerning the data in the mapping process. This then enables the generation of more appropriate feature spaces for the purposes of enhanced visualisation or subsequent classification. A comparison with established methods for feature extraction is given for data taken from the 1992 Research Assessment Exercise for higher educational institutions in the United Kingdom. This is a difficult high-dimensional dataset, and illustrates well the benefit of the new topographic technique. A generalisation of the proposed model is considered for implementation of the classical multidimensional scaling (MDS) routine. This is related to Oja's principal subspace neural network, whose learning rule is shown to descend the error surface of the proposed MDS model. Some of the technical issues concerning the design and training of topographic neural networks are investigated. It is shown that neural network models can be less sensitive to entrapment in the sub-optimal local minima that badly affect the standard Sammon algorithm, and tend to exhibit good generalisation as a result of implicit weight decay in the training process. It is further argued that for ideal structure retention, the network transformation should be perfectly smooth for all inter-data directions in input space. Finally, there is a critique of optimisation techniques for topographic mappings, and a new training algorithm is proposed.
A convergence proof is given, and the method is shown to produce lower-error mappings more rapidly than previous algorithms.
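Sammon's method, on which the proposed transformational variant builds, minimises a stress measure over pairwise distances; a minimal sketch (naive O(n²) loop, no optimisation step):

```python
import numpy as np
from itertools import combinations

def sammon_stress(X, Y):
    """Sammon stress between original points X and projected points Y:
    E = (1 / sum_ij d_ij) * sum_ij (d_ij - D_ij)^2 / d_ij,
    where d_ij are input-space and D_ij map-space distances."""
    num, norm = 0.0, 0.0
    for i, j in combinations(range(len(X)), 2):
        d = np.linalg.norm(X[i] - X[j])  # input-space distance
        D = np.linalg.norm(Y[i] - Y[j])  # map-space distance
        num += (d - D) ** 2 / d
        norm += d
    return num / norm

X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(sammon_stress(X, X[:, :2]))  # a distance-preserving projection gives 0.0
```

In the transformational variant, Y is produced by a radial basis function network applied to X, so the mapping generalises to unseen points rather than being defined only on the training set.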

Relevance:

30.00%

Publisher:

Abstract:

Mixture Density Networks (MDNs) are a well-established method for modelling the conditional probability density, which is useful for complex multi-valued functions where regression methods (such as MLPs) fail. In this paper we extend earlier research on a regularisation method for a special case of MDNs to the general case using evidence-based regularisation, and we show how the Hessian of the MDN error function can be evaluated using R-propagation. The method is tested on two data sets and compared with early stopping.
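An MDN represents the conditional density p(t|x) as a mixture of Gaussian kernels whose parameters are network outputs; a minimal sketch of evaluating that density, with the mixture parameters given directly rather than produced by a network:

```python
import numpy as np

def mdn_density(t, alphas, mus, sigmas):
    """Conditional density p(t|x) modelled by an MDN: a mixture of Gaussian
    kernels whose parameters (alphas, mus, sigmas) would in practice be
    computed by a network from the input x."""
    phi = np.exp(-0.5 * ((t - mus) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return float(np.sum(alphas * phi))

# A bimodal conditional density: the multi-valued case where plain
# MLP regression (which predicts the conditional mean) fails.
alphas = np.array([0.5, 0.5])
mus = np.array([-1.0, 1.0])
sigmas = np.array([0.3, 0.3])
p = mdn_density(0.0, alphas, mus, sigmas)          # low density between the modes
assert mdn_density(1.0, alphas, mus, sigmas) > p   # high density at a mode
```

An MLP trained by least squares would output the conditional mean (here 0.0), a value the target almost never takes; the mixture representation keeps both modes.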

Relevance:

30.00%

Publisher:

Abstract:

This technical report contains all technical information and results from experiments in which Mixture Density Networks (MDNs), using an RBF network with fixed kernel means and variances, were used to infer the wind direction from satellite data from the ERS-II weather satellite. The regularisation is based on the evidence framework, and three different approximations were used to estimate the regularisation parameter. The results were compared with those obtained using `early stopping'.

Relevance:

30.00%

Publisher:

Abstract:

This is a theoretical paper that examines the interplay between individual and collective capabilities and competencies and value transactions in collaborative environments. The theory behind value creation is examined and two types of value are identified: internal value (shareholder value) and external value (value proposition). The literature on collaborative enterprises/networks is also examined, with particular emphasis on supply chains, extended/virtual enterprises and clusters as representatives of different forms and maturities of collaboration. The interplay of value transactions with competencies and capabilities is examined and discussed in detail. Finally, a model is presented which consists of value transactions, together with a table which compares the characteristics of different types of collaborative enterprises/networks. It is proposed that this model presents a platform for further research to develop an in-depth understanding of how value may be created and managed in collaborative enterprises/networks.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to investigate how research and development (R&D) collaboration takes place for complex new products in the automotive sector. The research aims to give guidelines to increase the effectiveness of such collaborations.
Design/methodology/approach – The methodology used to investigate this issue was grounded theory. The empirical data were collected through a mixture of interviews and questionnaires. The resulting inducted conceptual models were subsequently validated in industrial workshops.
Findings – The findings show that frontloading of the collaborative members was a major issue in managing successful R&D collaborations.
Research limitations/implications – The limitation of this research is that it is based only in the German automotive industry.
Practical implications – Models and guidelines are given to help make a success of collaborative projects, together with their potential impacts on time, cost and quality metrics.
Originality/value – Frontloading is not often studied in a collaborative manner; it is normally studied within just one organisation. This study has novel value because it has involved a number of different members throughout the supplier network.

Relevance:

30.00%

Publisher:

Abstract:

Ad hoc wireless sensor networks (WSNs) are formed from self-organising configurations of distributed, energy-constrained, autonomous sensor nodes. The service lifetime of such sensor nodes depends on the power supply and the energy consumption, which is typically dominated by the communication subsystem. One of the key challenges in unlocking the potential of such data-gathering sensor networks is conserving energy so as to maximize their post-deployment active lifetime. This thesis describes research carried out on the continued development of the novel energy-efficient Optimised grids algorithm, which increases WSN lifetime and improves on QoS parameters, yielding higher throughput and lower latency and jitter for the next generation of WSNs. Based on the relationship between range and traffic, the novel Optimised grids algorithm provides a robust, traffic-dependent, energy-efficient grid size that minimises the cluster-head energy consumption in each grid and balances energy use throughout the network. Efficient spatial reusability allows the novel Optimised grids algorithm to improve on network QoS parameters. The most important advantage of this model is that it can be applied to all one- and two-dimensional traffic scenarios where the traffic load may fluctuate due to sensor activities. During traffic fluctuations the novel Optimised grids algorithm can be used to re-optimise the wireless sensor network, bringing further benefits in energy reduction and improvement in QoS parameters. As idle energy becomes dominant at lower traffic loads, the new Sleep Optimised grids model incorporates sleep-energy and idle-energy duty cycles, which can be implemented to achieve further network lifetime gains in all wireless sensor network models. Another key advantage of the novel Optimised grids algorithm is that it can be implemented alongside existing energy-saving protocols such as GAF, LEACH, SMAC and TMAC to further enhance network lifetimes and improve on QoS parameters. The novel Optimised grids algorithm does not interfere with these protocols, but creates an overlay to optimise the grid sizes and hence the transmission range of the wireless sensor nodes.
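The energy trade-off that grid-size optimisation exploits can be illustrated with the common first-order radio model (a textbook sketch with illustrative constants, not the thesis's actual Optimised grids formulation):

```python
def tx_energy(bits, d, e_elec=50e-9, e_amp=100e-12):
    """First-order radio model: transmit cost per packet grows with the
    square of the transmission distance d (constants are illustrative)."""
    return bits * e_elec + bits * e_amp * d ** 2

def rx_energy(bits, e_elec=50e-9):
    """Receive cost is distance-independent electronics energy."""
    return bits * e_elec

# A cluster head relaying traffic pays both receive and transmit cost,
# so smaller grids (shorter d per hop) trade more relay hops against
# lower per-hop transmit energy -- the balance the grid size must strike.
packet = 1024  # bits
for d in (10, 50, 100):
    print(d, tx_energy(packet, d) + rx_energy(packet))
```

Because the amplifier term dominates at long range while electronics energy dominates at short range, there is an intermediate grid size that minimises cluster-head energy for a given traffic load.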

Relevance:

30.00%

Publisher:

Abstract:

The main aim of this article is to shed some light on the way in which actor network theory (ANT) might contribute to case research in accounting. The paper will seek to explain some of the theoretical suppositions which are commonly associated with ANT and which have so far made little impact on the accounting literature. At the same time the accounting literature has shown a particular reluctance to engage with the central concept of ANT, which Lee and Hassard characterise as the desire to bring together the "human and non-human, social and technical factors in the same analytical view". The article also features a discussion of a research project which used an approach giving emphasis to both humans and objects in order to understand how "facts" have come to be settled as they are. By taking such views into the research, it is hoped to provide insight into both the detail of accounting as it is practised within organisations and the manner in which human actors and objects of technology may combine to constitute networks within organisations.

Relevance:

30.00%

Publisher:

Abstract:

It has been suggested that, in order to maintain its relevance, critical research must develop a strong emphasis on empirical work rather than the conceptual emphasis that has typically characterized critical scholarship in management. A critical project of this nature is applicable in the information systems (IS) arena, which has a growing tradition of qualitative inquiry. Despite its relativist ontology, actor–network theory places a strong emphasis on empirical inquiry, and this paper argues that actor–network theory, with its careful tracing and recording of heterogeneous networks, is well suited to the generation of detailed and contextual empirical knowledge about IS. The intention in this paper is to explore the relevance of IS research informed by actor–network theory in the pursuit of a broader critical research project as defined in earlier work.

Relevance:

30.00%

Publisher:

Abstract:

Previous research suggests that changing consumer and producer knowledge structures play a role in market evolution and that the sociocognitive processes of product markets are revealed in the sensemaking stories of market actors that are rebroadcast in commercial publications. In this article, the authors lend further support to the story-based nature of market sensemaking and the use of the sociocognitive approach in explaining the evolution of high-technology markets. They examine the content (i.e., subject matter or topic) and volume (i.e., the number) of market stories and the extent to which the content and volume of market stories evolve as a technology emerges. Data were obtained from a content analysis of 10,412 article abstracts, published in key trade journals, pertaining to Local Area Network (LAN) technologies and spanning the period 1981 to 2000. Hypotheses concerning the evolving nature (content and volume) of market stories in technology evolution are tested. The analysis identified four categories of market stories: technical, product availability, product adoption, and product discontinuation. The findings show that the emerging technology passes initially through a 'technical-intensive' phase, in which technology-related stories dominate; through a 'supply-push' phase, in which stories presenting products embracing the technology tend to exceed technical stories while the number of product adoption stories rises; to a 'product-focus' phase, with stories predominantly focusing on product availability. Overall story volume declines when a technology matures, as the need for sensemaking reduces. When stories about product discontinuation surface, these signal the decline of the current technology. New technologies that fail to maintain the 'product-focus' stage also reflect limited market acceptance. The article also discusses the theoretical and managerial implications of the study's findings. © 2002 Elsevier Science Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to investigate the use of the 802.11e MAC to resolve transmission control protocol (TCP) unfairness.
Design/methodology/approach: The paper shows how a TCP sender may adapt its transmission rate, using the number of hops and the standard deviation of recently measured round-trip times, to address TCP unfairness.
Findings: Simulation results show that the proposed techniques provide even throughput, by providing TCP fairness, as the number of hops increases over a wireless mesh network (WMN).
Research limitations/implications: Future work will examine the performance of TCP over routing protocols which use different routing metrics. Other future work is scalability over WMNs. Since scalability is a problem with multi-hop communication, carrier sense multiple access (CSMA) will be compared with time division multiple access (TDMA), and a hybrid of TDMA and code division multiple access (CDMA) will be designed that works with TCP and other traffic. Finally, to further improve network performance and also increase the network capacity of TCP for WMNs, the usage of multiple channels instead of only a single fixed channel will be exploited.
Practical implications: By allowing the tuning of 802.11e MAC parameters that have previously been constant in the 802.11 MAC, the paper proposes the use of the 802.11e MAC on a per-class basis, collecting the TCP ACKs into a single class, together with a novel congestion control method for TCP over a WMN. The key feature of the proposed TCP algorithm is the detection of congestion by measuring the fluctuation of the RTT of TCP ACK samples via the standard deviation, combined with the 802.11e AIFS and CWmin parameters, which allow the TCP ACKs to be prioritised so that they match the volume of the TCP data packets. While the 802.11e MAC provides flexibility and flow/congestion control mechanisms, the challenge is to take advantage of these features.
Originality/value: With the 802.11 MAC not having flexibility and flow/congestion control mechanisms implemented with TCP, these shortcomings contribute to TCP unfairness with competing flows. © Emerald Group Publishing Limited.
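The RTT-fluctuation test in the proposed congestion control can be sketched as follows; this is a simplified stand-in that flags congestion when the coefficient of variation (standard deviation over mean) of recent TCP ACK round-trip times exceeds a threshold, with both the statistic and the threshold being illustrative assumptions rather than the paper's exact formulation:

```python
import statistics

def congestion_detected(rtt_samples, cv_threshold=0.3):
    """Flag congestion when recent TCP ACK round-trip times fluctuate
    strongly relative to their mean; a stable RTT suggests an uncongested
    path, large swings suggest queueing in the mesh."""
    mean = statistics.mean(rtt_samples)
    return statistics.stdev(rtt_samples) / mean > cv_threshold

print(congestion_detected([10.0, 10.5, 9.8, 10.2]))   # steady RTTs: False
print(congestion_detected([10.0, 25.0, 12.0, 40.0]))  # fluctuating RTTs: True
```

On detection, the sender would reduce its transmission rate, scaled by the hop count, before losses force a retransmission timeout.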

Relevance:

30.00%

Publisher:

Abstract:

Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Rather than being just another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Many kinds of protocols work over WMNs, such as IEEE 802.11a/b/g, 802.15 and 802.16. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. While rate adaptation is a significant component, only a few algorithms, such as Auto Rate Fallback (ARF) or Receiver Based Auto Rate (RBAR), have been published. In this paper we show that MAC, packet loss and physical layer conditions play an important role in achieving good channel conditions. We also perform rate adaptation along with multiple packet transmission for better throughput. Improvements in performance can be obtained by dynamically monitoring channel quality and adapting to its changes through multiple packet transmission and by adjusting the packet transmission rates according to certain optimisation criteria. The proposed method detects channel congestion by measuring the fluctuation of the signal via its standard deviation, and detects packet loss before channel performance diminishes. We show that the use of such techniques in WMNs can significantly improve performance. The effectiveness of the proposed method is demonstrated in an experimental wireless network testbed via packet-level simulation. Our simulation results show that we were able to improve throughput performance regardless of the channel condition.
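Auto Rate Fallback (ARF), one of the published rate adaptation algorithms cited above, can be sketched as follows (the rate set and the success/failure thresholds follow the classic ARF description; this is an illustration of the baseline scheme, not the paper's proposed method):

```python
RATES = [6, 12, 24, 36, 48, 54]  # Mbit/s, an illustrative 802.11a/g rate set

class AutoRateFallback:
    """Minimal ARF-style controller: step the rate up after a run of
    consecutive successful transmissions, step it down after
    consecutive failures."""
    def __init__(self, up_after=10, down_after=2):
        self.idx, self.succ, self.fail = 0, 0, 0
        self.up_after, self.down_after = up_after, down_after

    def on_result(self, ok):
        if ok:
            self.succ, self.fail = self.succ + 1, 0
            if self.succ >= self.up_after and self.idx < len(RATES) - 1:
                self.idx, self.succ = self.idx + 1, 0  # probe a faster rate
        else:
            self.fail, self.succ = self.fail + 1, 0
            if self.fail >= self.down_after and self.idx > 0:
                self.idx, self.fail = self.idx - 1, 0  # fall back
        return RATES[self.idx]

arf = AutoRateFallback()
for _ in range(10):
    rate = arf.on_result(True)
print(rate)  # 12: stepped up after 10 consecutive successes
```

Because ARF reacts only to transmission outcomes, it cannot distinguish collision losses from channel-quality losses, which motivates the signal-statistics-based detection proposed in the paper.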

Relevance:

30.00%

Publisher:

Abstract:

Culture defines collective behavior and interactions among people in groups. In organizations, it shapes group identity, work patterns, communication schemes, and interpersonal relations. Any change in organizational culture will lead to changes in these organizational factors, and vice versa. From a managerial standpoint, serious consideration should therefore be given to how to cultivate an organizational culture that would enhance these elements in the organizational workplace. Based on case studies in two hospitals, this paper investigates how organizational culture is shaped by a particular type of information and communication technology, wireless networks, a topic that is generally overlooked by the mainstream research community, and in turn indicates how such cultural changes in organizations renew their competitiveness in the marketplace. Lessons learned from these cases provide valuable insights to emerging IT management and culture studies in general, and to wireless network management in the healthcare sector in particular.