829 results for NETWORK MODEL
Abstract:
A generalised formulation of the mathematical model developed for the analysis of transients in a canal network, under subcritical flow, with any realistic combination of control structures and their multiple operations, has been presented. The model accounts for a large variety of control structures such as weirs, gates and notches, discharging under different conditions, namely submerged and unsubmerged. A numerical scheme to compute an approximate steady-state flow condition as the initial condition has also been presented. The model can handle complex situations that may arise from multiple gate operations. This has been demonstrated with a problem wherein the boundary conditions change from a gate discharge equation to an energy equation and back to a gate discharge equation. In such a situation the wave strikes a fixed gate and leads to large and rapid fluctuations in both discharge and depth.
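As an illustration of the gate boundary conditions referred to above, the following Python sketch computes a textbook sluice-gate discharge that switches between free and submerged flow. The formula, the discharge coefficient, the simplified submergence criterion and the function name are illustrative assumptions, not the formulation used in the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def gate_discharge(h_up, h_down, gate_opening, gate_width, cd=0.61):
    """Textbook sluice-gate discharge (m^3/s) for free vs. submerged flow.

    h_up, h_down : upstream / downstream depths above the gate sill (m)
    gate_opening : vertical gate opening (m)
    gate_width   : gate width (m)
    cd           : discharge coefficient (illustrative value)
    """
    area = gate_opening * gate_width
    if h_down <= gate_opening:   # simplified criterion: jet not drowned -> free flow
        head = h_up
    else:                        # submerged flow: effective head is the depth difference
        head = h_up - h_down
    return cd * area * math.sqrt(2.0 * G * max(head, 0.0))

# A transient solver would call a relation like this as the internal boundary
# condition at a gate, switching to an energy equation when the gate is fixed/closed.
print(gate_discharge(h_up=2.0, h_down=0.4, gate_opening=0.5, gate_width=3.0))
```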
Abstract:
In earlier work, nonisomorphic graphs have been converted into networks to realize Multistage Interconnection Networks, which are topologically nonequivalent to the Baseline network. The drawback of this technique is that these nonequivalent networks are not guaranteed to be self-routing, because each node in the graph model can be replaced by a (2 × 2) switch in any one of four different configurations. Hence, the problem of routing in these networks remains unsolved. Moreover, nonisomorphic graphs were obtained by interconnecting bipartite loops in a heuristic manner; the heuristic nature of this procedure makes it difficult to guarantee full connectivity in large networks. We solve these problems through a direct approach, in which a matrix model for self-routing networks is developed. An example is given to show that this model encompasses nonequivalent self-routing networks. This approach has the additional advantage that the matrix model itself ensures full connectivity.
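The self-routing property at stake can be illustrated on the Omega network (which is topologically equivalent to the Baseline network): each 2 × 2 switch looks at one destination bit to choose between its two outputs. The Python sketch below is a generic destination-tag routing illustration, not the matrix model developed in the paper.

```python
def omega_route(src: int, dst: int, n_stages: int):
    """Trace a packet through an Omega network of 2x2 switches using
    destination-tag self-routing: at stage i the switch output is chosen
    by the i-th most significant bit of the destination address."""
    n = 1 << n_stages                      # number of inputs/outputs
    line = src
    path = [line]
    for stage in range(n_stages):
        line = ((line << 1) | (line >> (n_stages - 1))) & (n - 1)  # perfect shuffle
        bit = (dst >> (n_stages - 1 - stage)) & 1                  # routing bit for this stage
        line = (line & ~1) | bit                                   # 2x2 switch: straight or cross
        path.append(line)
    assert line == dst                     # the packet always arrives at its destination
    return path

print(omega_route(src=5, dst=2, n_stages=3))   # 8x8 network
```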
Abstract:
This study is about the challenges of learning in the creation and implementation of new sustainable technologies. The system of biogas production in the Programme of Sustainable Swine Production (3S Programme) conducted by the Sadia food processing company in Santa Catarina State, Brazil, is used as a case example for exploring the challenges, possibilities and obstacles of learning in the use of biogas production as a way to increase the environmental sustainability of swine production. The aim is to contribute to the discussion about the possibilities of developing systems of biogas production for sustainability (BPfS). In the study I develop hypotheses concerning the central challenges and possibilities for developing systems of BPfS in three phases. First, I construct a model of the network of activities involved in BP for sustainability in the case study. Next, I construct a) an idealised model of the historically evolved concepts of BPfS through an analysis of the development of forms of BP and b) a hypothesis of the current central contradictions within and between the activity systems involved in BP for sustainability in the case study. This hypothesis is further developed through two empirical analyses: an analysis of the actors' senses in taking part in the system, and an analysis of the disturbance processes in the implementation and operation of the BP system in the 3S Programme. The historical analysis shows that BP for sustainability in the 3S Programme emerged as a feasible solution for the contradiction between environmental protection and the concentration, intensification and specialisation of swine production. This contradiction created a threat to the supply of swine to the food processing company. In the food production activity, the contradiction was expressed as a contradiction between the desire of the company to become a sustainable company and the situation in the outsourced farms. For the swine producers, the contradiction was expressed between contradictory rules: the market exerted pressure for continual increases in scale, specialisation and concentration to keep production economically viable, while environmental rules imposed a limit to this expansion. Although the observed disturbances in the biogas system seemed to be merely technical and localised within the farms, the analysis proposed that these disturbances were formed in and between the activity systems involved in the network of BPfS during the implementation. The disturbances observed could be explained by four contradictions: a) contradictions between the new, more expanded activity of sustainable swine production and the old activity, b) a contradiction between the concept of BP for carbon credits and BP for local use in the BPfS that was implemented, c) contradictions between the new UNFCCC methodology for applying for carbon credits and the small size of the farms, and d) contradictions between the technologies of biogas use and burning available in the market and the small size of the farms. The main finding of this study relates to the zone of proximal development (ZPD) of the BPfS in the Sadia food production chain. The model is first developed as a general model of concepts of BPfS and further developed here to the specific case of the BPfS in the 3S Programme. The model is composed of two developmental dimensions: societal and functional integration. The dimension of societal integration refers to the level of integration with other activities outside the farm.
At one extreme, biogas production is self-sufficient and highly independent and the products of BP are consumed within the farm, while at the other extreme BP is highly integrated in markets and networks of collaboration, and BP products are exchanged within the markets. The dimension of functional integration refers to the level of integration between products and production processes so that economies of scope can be achieved by combining several functions using the same utility. At one extreme, BP is specialised in only one product, which allows achieving economies of scale, while at the other extreme there is an integrated production in which several biogas products are produced in order to maximise the outcomes from the BP system. The analysis suggests that BP is moving towards a societal integration, towards the market and towards a functional integration in which several biogas products are combined. The model is a hypothesis to be further tested through interventions by collectively constructing the new proposed concept of BPfS. Another important contribution of this study refers to the concept of the learning challenge. Three central learning challenges for developing a sustainable system of BP in the 3S Programme were identified: 1) the development of cheaper and more practical technologies of burning and measuring the gas, as well as the reduction of costs of the process of certification, 2) the development of new ways of using biogas within farms, and 3) the creation of new local markets and networks for selling BP products. One general learning challenge is to find more varied and synergic ways of using BP products than solely for the production of carbon credits. Both the model of the ZPD of BPfS and the identified learning challenges could be used as learning tools to facilitate the development of biogas production systems. The proposed model of the ZPD could be used to analyse different types of agricultural activities that face a similar contradiction. The findings could be used in interventions to help actors to find their own expansive actions and developmental projects for change. Rather than proposing a standardised best concept of BPfS, the idea of these learning tools is to facilitate the analysis of local situations and to help actors to make their activities more sustainable.
Abstract:
Diabetes is a long-term disease during which the body's production and use of insulin are impaired, causing the glucose concentration in the bloodstream to rise. Regulating blood glucose levels as close to normal as possible leads to a substantial decrease in the long-term complications of diabetes. In this paper, an intelligent online feedback-treatment strategy is presented for the control of blood glucose levels in diabetic patients using single network adaptive critic (SNAC) neural networks, an approach based on nonlinear optimal control theory. A recently developed mathematical model of the nonlinear dynamics of glucose and insulin interaction in the blood system has been revised and considered for synthesizing the neural network for feedback control. The idea is to replicate the function of pancreatic insulin, i.e., to have a fairly continuous measurement of blood glucose and a situation-dependent insulin injection into the body using an external device. Detailed studies are carried out to analyze the effectiveness of this adaptive critic-based feedback medication strategy. A comparison study with linear quadratic regulator (LQR) theory shows that the proposed nonlinear approach offers some important advantages such as quicker response, avoidance of hypoglycemia problems, etc. Robustness of the proposed approach is also demonstrated through a large number of simulations considering random initial conditions and parametric uncertainties. Copyright (C) 2009 John Wiley & Sons, Ltd.
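For reference, the LQR baseline mentioned in the comparison can be synthesized in a few lines once a linearized glucose-insulin model is available. The state-space matrices below are placeholders for a hypothetical linearization, not the revised model or parameter values used in the paper, and the SNAC controller itself is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized glucose-insulin dynamics around the basal state:
# x = [glucose deviation, insulin deviation], u = external insulin infusion.
# The numbers below are placeholders, NOT the model parameters from the paper.
A = np.array([[-0.02, -0.5],
              [ 0.0,  -0.1]])
B = np.array([[0.0],
              [0.05]])
Q = np.diag([1.0, 0.01])   # penalize glucose deviation most
R = np.array([[0.1]])      # penalize insulin effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # optimal state-feedback gain, u = -K x
print("LQR gain:", K)
```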
Monte Carlo simulation of network formation based on structural fragments in epoxy-anhydride systems
Abstract:
A method combining the Monte Carlo technique and the simple fragment approach has been developed for simulating network formation in amine-catalysed epoxy-anhydride systems. The method affords a detailed insight into the nature and composition of the network, showing the distribution of various fragments. It has been used to characterize the network formation in the reaction of the diglycidyl ester of isophthalic acid with hexahydrophthalic anhydride, catalysed by benzyldimethylamine. Pre-gel properties like number and weight distributions and average molecular weights have been calculated as a function of epoxy conversion, leading to a prediction of the gel-point conversion. Analysis of the simulated network further yields other characteristic properties such as concentration of crosslink points, distribution and concentration of elastically active chains, average molecular weight between crosslinks, sol content and mass fraction of pendent chains. A comparison has been made of the properties obtained through simulation with those predicted by the fragment approach alone, which, however, gives only average properties. The Monte Carlo simulation results clearly show that loops and other cyclic structures occur in the gel. This may account for the differences observed between the results of the simulation and the fragment model in the post-gel phase. Copyright (C) 1996 Elsevier Science Ltd.
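The idea of locating a gel point by Monte Carlo can be illustrated with a deliberately crude random-bond simulation: bonds are formed at random between monomer sites and the conversion at which a macroscopic cluster first appears is recorded. The Python sketch below (union-find bookkeeping) ignores the fragment chemistry, per-site functionality limits and catalysis treated in the paper; the functionality, cluster-size threshold and names are illustrative assumptions.

```python
import random

class DisjointSet:
    """Union-find structure tracking molecular clusters as bonds form."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.largest = 1

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.largest = max(self.largest, self.size[ra])

def gel_point(n_monomers=20000, functionality=3, gel_fraction=0.5, seed=1):
    """Form bonds at random and report the conversion at which the largest
    cluster first contains `gel_fraction` of all monomers (a crude gel proxy)."""
    rng = random.Random(seed)
    ds = DisjointSet(n_monomers)
    max_bonds = n_monomers * functionality // 2
    for bonds in range(1, max_bonds + 1):
        ds.union(rng.randrange(n_monomers), rng.randrange(n_monomers))
        if ds.largest >= gel_fraction * n_monomers:
            return bonds / max_bonds        # conversion of reactive groups
    return 1.0

print("estimated gel-point conversion:", gel_point())
```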
Abstract:
In this paper, we give a method for probabilistic assignment in the Realistic Abductive Reasoning Model. The knowledge is assumed to be represented in the form of causal chaining, namely a hyper-bipartite network. The hyper-bipartite network is the most generalized form of knowledge representation for which, so far, there has been no way of assigning probabilities to the explanations. First, the inference mechanism using the realistic abductive reasoning model is briefly described; probabilities are then assigned to each of the explanations so that they can be ranked in decreasing order of plausibility.
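A minimal picture of what "assigning probability to the explanations" can look like, under the strong simplifying assumption that individual causes occur independently with known priors, is sketched below in Python; the cause names, prior values and scoring rule are illustrative, not the assignment method of the paper.

```python
from itertools import chain, combinations

# Illustrative prior probabilities of individual causes (hypothetical values).
cause_prior = {"d1": 0.10, "d2": 0.30, "d3": 0.05}

def score(explanation):
    """Naive plausibility of an explanation (a set of causes), assuming the
    causes occur independently: present causes contribute their prior,
    absent causes contribute (1 - prior)."""
    p = 1.0
    for cause, prior in cause_prior.items():
        p *= prior if cause in explanation else (1.0 - prior)
    return p

# Candidate explanations, e.g. those produced by an abductive reasoner.
candidates = [frozenset(s) for s in chain.from_iterable(
    combinations(cause_prior, r) for r in range(1, len(cause_prior) + 1))]

# Rank the explanations in decreasing order of plausibility.
for expl in sorted(candidates, key=score, reverse=True):
    print(sorted(expl), round(score(expl), 4))
```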
Abstract:
Representatives of several Internet access providers have expressed their wish to see a substantial change in the pricing policies of the Internet. In particular, they would like to see content providers pay for use of the network, given the large amount of resources they use. This would be in clear violation of the "network neutrality" principle that had characterized the development of the wireline Internet. Our first goal in this paper is to propose and study possible ways of implementing such payments and of regulating their amount. We introduce a model that includes the internauts' behavior, the utilities of the ISP and of the content providers, and the monetary flow that involves the internauts, the ISP and content provider, and in particular, the content provider's revenues from advertisements. We consider various game models and study the resulting equilibria; they are all combinations of a noncooperative game (in which the service and content providers determine how much they will charge the internauts) with a cooperative one, in which the content provider and the service provider bargain with each other over payments to one another. We include in our model a possibly asymmetric bargaining power, represented by a parameter that varies between zero and one. We then extend our model to study the case of several content providers. We also provide a very brief study of the equilibria that arise when one of the content providers enters into an exclusive contract with the ISP.
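The bargaining step can be made concrete with the generalized (asymmetric) Nash bargaining solution over a side payment, where the exponent gamma in [0, 1] plays the role of the bargaining-power parameter mentioned above. The profit numbers, zero disagreement points and grid search below are illustrative assumptions, not the paper's utility model.

```python
import numpy as np

def bargain_payment(isp_profit, cp_profit, gamma, payments):
    """Generalized Nash bargaining over a side payment p from the content
    provider to the ISP: choose p maximizing the weighted Nash product
    (U_isp(p))**gamma * (U_cp(p))**(1-gamma), with zero disagreement points."""
    best_p, best_val = None, -np.inf
    for p in payments:
        u_isp = isp_profit + p
        u_cp = cp_profit - p
        if u_isp <= 0 or u_cp <= 0:        # both parties must gain from the agreement
            continue
        val = gamma * np.log(u_isp) + (1 - gamma) * np.log(u_cp)
        if val > best_val:
            best_p, best_val = p, val
    return best_p

# Hypothetical pre-transfer profits and a symmetric bargaining power (gamma = 0.5).
grid = np.linspace(-5.0, 5.0, 1001)
print(bargain_payment(isp_profit=2.0, cp_profit=6.0, gamma=0.5, payments=grid))
```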
Abstract:
A single source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the symbols received at their incoming edges on their outgoing edges. In this work, we introduce network-error correction for single source, acyclic, unit-delay, memory-free networks with coherent network coding for multicast. A convolutional code is designed at the source based on the network code in order to correct network errors that correspond to any of a given set of error patterns, as long as consecutive errors are separated by a certain interval which depends on the convolutional code selected. Bounds on this interval and the field size required for constructing the convolutional code with the required free distance are also obtained. We illustrate the performance of convolutional network error correcting codes (CNECCs) designed for the unit-delay networks using simulations of CNECCs on an example network under a probabilistic error model.
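The underlying notion of internal nodes forwarding linear combinations of incoming symbols is the standard coherent network-coding setting; the classic butterfly multicast example below (Python, GF(2) arithmetic via XOR) illustrates it, without the unit-delay or error-correcting machinery developed in the paper.

```python
# Classic butterfly network, the textbook example of single-source multicast
# network coding over GF(2).  Internal node C sends the XOR (a linear
# combination) of its two incoming symbols; both sinks then recover (b1, b2).
def butterfly(b1: int, b2: int):
    c_out = b1 ^ b2              # node C combines the two incoming symbols
    sink1 = (b1, b1 ^ c_out)     # sink 1 receives b1 directly plus c_out
    sink2 = (b2 ^ c_out, b2)     # sink 2 receives b2 directly plus c_out
    return sink1, sink2

assert butterfly(1, 0) == ((1, 0), (1, 0))
assert butterfly(1, 1) == ((1, 1), (1, 1))
```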
Abstract:
A single-source network is said to be memory-free if all of the internal nodes (those except the source and the sinks) do not employ memory but merely send linear combinations of the incoming symbols (received at their incoming edges) on their outgoing edges. Memory-free networks with delay that use network coding are forced to perform inter-generation network coding, as a result of which some or all sinks may require a large amount of memory for decoding. In this work, we address this problem by also utilizing memory elements at the internal nodes of the network, which results in a reduction of the number of memory elements used at the sinks. We give an algorithm which employs memory at all the nodes of the network to achieve single-generation network coding. For fixed latency, our algorithm reduces the total number of memory elements used in the network to achieve single-generation network coding. We also discuss the advantages of employing single-generation network coding together with convolutional network-error correction codes (CNECCs) for networks with unit-delay and illustrate the performance gain of CNECCs obtained by using memory at the intermediate nodes, using simulations on an example network under a probabilistic network error model.
Abstract:
This paper presents the capability of neural networks as a computational tool for solving the constrained optimization problems arising in routing algorithms for present-day communication networks. The application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the average communication delay, is addressed. The effectiveness of the neural network is shown by the results of simulating a neural design that solves the shortest path problem. The simulated neural network model is shown to be usable within an optimum routing algorithm known as the flow deviation algorithm. It is also shown that the model enables the routing algorithm to be implemented in real time and to adapt to changes in link costs and network topology. (C) 2002 Elsevier Science Ltd. All rights reserved.
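The shortest path problem targeted by the neural design is the same subproblem that the flow deviation algorithm solves repeatedly as link costs change; a conventional (non-neural) Dijkstra sketch in Python is shown below for comparison, with a made-up toy topology.

```python
import heapq

def dijkstra(adj, src):
    """Classical shortest-path computation over link costs; in the flow
    deviation algorithm this is re-run whenever link costs (delays) change.
    adj: {node: [(neighbor, cost), ...]}"""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy 4-node packet-switched network with link delays as costs.
adj = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0), (3, 4.0)], 2: [(3, 1.0)], 3: []}
print(dijkstra(adj, src=0))
```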
Abstract:
This paper presents a prototype of a fuzzy system for alleviation of network overloads in the day-to-day operation of power systems. The control used for overload alleviation is real power generation rescheduling. Generation Shift Sensitivity Factors (GSSF) are computed accurately, using a more realistic operational load flow model. Overloading of lines and sensitivity of controlling variables are translated into fuzzy set notation to formulate the relation between line overloading and the controlling ability of generation rescheduling. A fuzzy rule-based system is formed to select the controllers, their movement direction and step size. The overall sensitivity of line loading to each of the generators is also considered in selecting the controller. Results obtained for network overload alleviation of two modified Indian power networks (24-bus and 82-bus) with line outage contingencies are presented for illustration purposes.
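The role of the sensitivity factors can be sketched with the usual linearized relation F_new = F_old + Σ_g GSSF(line, g) · ΔP_g, which predicts how a generation shift relieves an overloaded line. The GSSF values, line and generator names, and flow numbers below are made up for illustration; the fuzzy rule base itself is not reproduced.

```python
# Hypothetical generation shift sensitivity factors GSSF[line][generator]:
# the change in MW flow on a line per MW increase at a generator (values made up).
GSSF = {"line-7": {"G1": 0.45, "G2": -0.30, "G3": 0.05}}

def new_line_flow(line, base_flow, delta_p):
    """Linearized post-rescheduling flow: F_new = F_old + sum_g GSSF * dP_g.
    delta_p must sum to ~0 so that total generation still meets the load."""
    assert abs(sum(delta_p.values())) < 1e-6
    return base_flow + sum(GSSF[line][g] * dp for g, dp in delta_p.items())

# Shift 40 MW from G1 (high positive sensitivity) to G2 to relieve an overload.
shift = {"G1": -40.0, "G2": +40.0, "G3": 0.0}
print(new_line_flow("line-7", base_flow=120.0, delta_p=shift))  # 120 - 18 - 12 = 90 MW
```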
Abstract:
The prevalent virtualization technologies provide QoS support within the software layers of the virtual machine monitor (VMM) or the operating system of the virtual machine (VM). The QoS features are mostly provided as extensions to the existing software used for accessing the I/O device; as a result, applications sharing the I/O device experience a loss of performance or usable bandwidth due to crosstalk effects. In this paper, we examine the NIC sharing effects across VMs on a Xen virtualized server and present an alternate paradigm that improves the shared bandwidth and reduces the crosstalk effect on the VMs. We implement the proposed hardware-software changes in a layered queuing network (LQN) model and use simulation techniques to evaluate the architecture. We find that simple changes in the device architecture and associated system software lead to application throughput improvements of up to 60%. The architecture also enables finer QoS controls at the device level and increases the scalability of device sharing across multiple virtual machines. We find that the performance improvement derived using the LQN model is comparable to that reported by similar but real implementations.
Abstract:
Lifetime calculations for large, dense sensor networks with fixed energy resources and the remaining residual energy have shown that, for a constant energy resource in a sensor network, the fault rate at the cluster head is invariant with network size when using the network layer with no MAC losses. Even after increasing the battery capacities in the nodes, the total lifetime does not increase beyond a maximum limit of 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic and channel polling needs of sensor networks. There are many MAC protocols that control the channel polling of the new radios available to sensor nodes for communication. This further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at a single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are the same and are stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case in which the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1 and ω2 and expected error P*, the maximum error rate for a single hop is bounded by P = 2P*. We study the effects of energy losses using cross-layer simulation of a large sensor network MAC setup, and the error rate that affects finding sufficient node densities for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior probability error bound is close to or higher than 2P*.
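The single-hop bound quoted above (expected error P* bounded by a maximum error rate of 2P*) involves the Bayes error for two known class densities ω1 and ω2. The Python sketch below merely estimates P* by Monte Carlo for two hypothetical one-dimensional Gaussian class densities and prints the corresponding 2P* value; it does not model the sensor-network routing itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical class densities with equal priors: omega1 ~ N(0,1), omega2 ~ N(2,1).
n = 200_000
labels = rng.integers(0, 2, n)                  # 0 -> omega1, 1 -> omega2
x = rng.normal(loc=2.0 * labels, scale=1.0)

def gauss(x, mu):
    # Unnormalized Gaussian likelihood (common variance, so normalization cancels).
    return np.exp(-0.5 * (x - mu) ** 2)

decide = (gauss(x, 2.0) > gauss(x, 0.0)).astype(int)   # Bayes decision (equal priors)
p_star = np.mean(decide != labels)                      # Monte Carlo estimate of P*
print(f"estimated P* = {p_star:.3f}, bound 2*P* = {2 * p_star:.3f}")
```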
Abstract:
Network processors today consist of multiple parallel processors (microengines) with support for multiple threads to exploit the packet-level parallelism inherent in network workloads. With such concurrency, packet ordering at the output of the network processor cannot be guaranteed. This paper studies the effect of concurrency in network processors on packet ordering. We use a validated Petri net model of a commercial network processor, the Intel IXP 2400, to determine the extent of packet reordering for the IPv4 forwarding application. Our study indicates that in addition to the parallel processing in the network processor, the allocation scheme for the transmit buffer also adversely impacts packet ordering. In particular, our results reveal that this packet reordering results in a packet retransmission rate of up to 61%. We explore different transmit buffer allocation schemes, namely contiguous, strided, local, and global, which reduce packet retransmission to 24%. We propose an alternative scheme, packet sort, which guarantees complete packet ordering while achieving a throughput of 2.5 Gbps. Further, packet sort outperforms the in-built packet ordering schemes in the IXP processor by up to 35%.
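A generic way to guarantee complete packet ordering at the output, in the spirit of a "packet sort" stage, is a sequence-number reorder buffer that releases packets only when their turn arrives. The Python sketch below is a textbook reorder buffer, not the IXP 2400 scheme evaluated in the paper.

```python
import heapq

class ReorderBuffer:
    """Release packets strictly in sequence-number order, buffering any
    packet that arrives ahead of its turn."""
    def __init__(self):
        self.expected = 0
        self.pending = []

    def push(self, seq, pkt):
        heapq.heappush(self.pending, (seq, pkt))
        released = []
        # Drain every packet whose sequence number is next in line.
        while self.pending and self.pending[0][0] == self.expected:
            released.append(heapq.heappop(self.pending)[1])
            self.expected += 1
        return released

rob = ReorderBuffer()
for seq in [2, 0, 1, 3]:                 # packets emerge out of order from parallel engines
    print(seq, "->", rob.push(seq, f"pkt{seq}"))
```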
Abstract:
We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter, while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximizes the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network, when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.