958 results for Network loss
Abstract:
The heterogeneity and open nature of network systems make analysis of compositions of components quite challenging, making the design and implementation of robust network services largely inaccessible to the average programmer. We propose the development of a novel type system and practical type spaces which reflect simplified representations of the results and conclusions that can be derived from complex compositional theories in more accessible ways, essentially allowing the system architect or programmer to be exposed only to the inputs and outputs of compositional analysis without having to be familiar with the ins and outs of its internals. Toward this end we present the TRAFFIC (Typed Representation and Analysis of Flows For Interoperability Checks) framework, a simple flow-composition and typing language with a corresponding type system. We then discuss and demonstrate the expressive power of a type space for TRAFFIC derived from the network calculus, allowing us to reason about and infer such properties as data arrival, transit, and loss rates in large composite network applications.
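As a rough illustration of the kind of inference such a type space can support (the names and composition rule below are hypothetical, not TRAFFIC's actual syntax or typing rules), a flow type might carry arrival, capacity, and loss attributes that are combined whenever two components are placed in sequence:

```python
# Minimal sketch, assuming a flow "type" is summarised by three scalar attributes.
from dataclasses import dataclass

@dataclass
class FlowType:
    arrival_rate: float   # offered traffic entering the component (units/s)
    capacity: float       # service rate the component can sustain (units/s)
    loss_rate: float      # fraction of offered traffic dropped

def compose(upstream: FlowType, downstream: FlowType) -> FlowType:
    """Sequentially compose two components and infer the resulting flow type."""
    delivered = upstream.arrival_rate * (1.0 - upstream.loss_rate)
    # Traffic beyond the downstream capacity is treated as additional loss.
    carried = min(delivered, downstream.capacity) * (1.0 - downstream.loss_rate)
    total_loss = 1.0 - carried / upstream.arrival_rate if upstream.arrival_rate else 0.0
    return FlowType(arrival_rate=upstream.arrival_rate,
                    capacity=min(upstream.capacity, downstream.capacity),
                    loss_rate=total_loss)

link = FlowType(arrival_rate=100.0, capacity=120.0, loss_rate=0.01)
router = FlowType(arrival_rate=0.0, capacity=90.0, loss_rate=0.02)
print(compose(link, router))   # inferred arrival, capacity, and loss for the pair
```

A checker built over such a space could then flag compositions whose inferred loss rate violates the bound declared for the service.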
Abstract:
Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially-hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that our NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than what any of the existing techniques or services are able to handle.
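For intuition only (this is not the NETEMBED algorithm or its compact-representation data structure; the names and constraint checks are illustrative), an embedding search of this flavour prunes candidate node mappings that violate resource or link constraints before recursing:

```python
# Illustrative sketch: backtracking search for an embedding of a query network
# into a hosting network, pruning infeasible candidate mappings early.
from typing import Dict, Optional

def find_embedding(query_nodes: Dict[str, float],      # node -> required CPU
                   host_nodes: Dict[str, float],        # node -> available CPU
                   query_links: Dict[tuple, float],     # (u, v) -> required bandwidth
                   host_links: Dict[tuple, float]       # (a, b) -> available bandwidth
                   ) -> Optional[Dict[str, str]]:
    order = sorted(query_nodes, key=query_nodes.get, reverse=True)  # hardest first

    def feasible(qn, hn, mapping):
        if query_nodes[qn] > host_nodes[hn]:
            return False
        # Every already-mapped neighbour must be reachable via a host link with
        # enough bandwidth; otherwise prune this candidate immediately.
        for (u, v), bw in query_links.items():
            if qn in (u, v):
                other = v if u == qn else u
                if other in mapping:
                    pair = (hn, mapping[other])
                    cap = host_links.get(pair) or host_links.get(pair[::-1], 0.0)
                    if cap < bw:
                        return False
        return True

    def search(i, mapping, used):
        if i == len(order):
            return dict(mapping)          # all query nodes placed: valid embedding
        qn = order[i]
        for hn in host_nodes:
            if hn not in used and feasible(qn, hn, mapping):
                mapping[qn] = hn
                result = search(i + 1, mapping, used | {hn})
                if result:
                    return result
                del mapping[qn]           # backtrack
        return None

    return search(0, {}, set())
```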
Abstract:
This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials ρa relaxes to a baseline vigilance ρ̄a. When ρ̄a is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
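A heavily simplified sketch of the match-tracking idea described above (the fuzzy-ART-style match function and the names are assumed for illustration; this omits the choice function, weight learning, and the map field of the full ARTMAP system):

```python
# Simplified match tracking: when the prediction made through a chosen ARTa
# category is wrong, vigilance rho_a is raised just above the current match
# value, forcing search for a better category; between trials it relaxes back
# to the baseline vigilance.
EPSILON = 1e-6

def match(input_pattern, prototype):
    """Degree of match |I ^ w| / |I| using the element-wise minimum."""
    overlap = sum(min(i, w) for i, w in zip(input_pattern, prototype))
    return overlap / (sum(input_pattern) or 1.0)

def train_trial(input_pattern, categories, predictions, target, baseline_rho):
    """categories: list of (index, prototype) in choice order; predictions: index -> label."""
    rho_a = baseline_rho
    for idx, prototype in categories:
        m = match(input_pattern, prototype)
        if m < rho_a:
            continue                      # fails vigilance, keep searching
        if predictions[idx] == target:
            return idx                    # resonance: accept this hypothesis
        rho_a = m + EPSILON               # match tracking: minimal vigilance raise
    return None                           # no existing category fits: recruit a new one
```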
Abstract:
Mesenchymal stem cells (MSCs) and endothelial progenitor cells (EPCs) represent promising cell sources for angiogenic therapies. There are, however, conflicting reports regarding the ability of MSCs to support network formation of endothelial cells. The goal of this study was to assess the ability of human bone marrow-derived MSCs to support network formation of endothelial outgrowth cells (EOCs) derived from umbilical cord blood EPCs. We hypothesized that upon in vitro coculture, MSCs and EOCs promote a microenvironment conducive for EOC network formation without the addition of angiogenic growth supplements. EOC networks formed by coculture with MSCs underwent regression and cell loss by day 10, with a near 4-fold and 2-fold reduction in branch points and mean segment length, respectively, in comparison with networks formed in vascular smooth muscle cell (SMC) cocultures. EOC network regression in MSC cocultures was not caused by a lack of vascular endothelial growth factor (VEGF)-A or by changes in TGF-β1 or Ang-2 supernatant concentrations in comparison with SMC cocultures. Removal of CD45+ cells from MSCs improved EOC network formation, with a 2-fold increase in total segment length and number of branch points in comparison to unsorted MSCs by day 6. These improvements, however, were not sustained by day 10. CD45 expression in MSC cocultures correlated with EOC network regression, with a 5-fold increase between day 6 and day 10 of culture. The addition of supplemental growth factors VEGF, fibroblast growth factor-2, EGF, hydrocortisone, insulin-like growth factor-1, ascorbic acid, and heparin to MSC cocultures promoted stable EOC network formation over 2 weeks in vitro, without affecting CD45 expression, as evidenced by a lack of significant differences in total segment length (p=0.96). These findings demonstrate that the ability of MSCs to support EOC network formation correlates with the removal of CD45+ cells and improves upon the addition of soluble growth factors.
Abstract:
In this paper, we have considered the problem of selection of available repertoires. With Ab2 as immunogens, we have used the idiotypic cascade to explore potential repertoires. Our results suggest that potential idiotypic repertoires are more or less the same within a species or between different species. A given idiotype "à la Oudin" can become a recurrent one within the same outbred species or within different species. Similarly, an intrastrain crossreactive idiotype can be induced in other strains, even though there is a genetic disparity between these strains. The structural basis of this phenomenon has been explored. We next examined results showing the loss and gain of recurrent idiotypes without any intentional idiotypic manipulation. A recurrent idiotype can be lost in a syngeneic transfer and a private one can become recurrent by changing the genetic background. The change of available idiotypic repertoires at the B cell level has profound influences on the idiotypic repertoires of suppressor T cells. All these results imply that idiotypic games are played by the immune system itself, a strong suggestion that the immune system is a functional idiotypic network.
Abstract:
This paper presents a new method for transmission loss allocation. The method is based on tracing the complex power flow through the network and determining the share of each load in the flow and losses through each line. Transmission losses are taken into consideration during power flow tracing. Unbundling line losses is carried out using an equation which has a physical basis and considers the coupling between active and reactive power flows, as well as the cross effects of active and reactive power on active and reactive losses. A tracing algorithm is presented that can be considered direct to a good extent, as there is no need for an exhaustive search to determine the flow paths; these are determined in a systematic way during the course of tracing. Results of applying the proposed method are also presented.
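As background only (this is the textbook line-loss approximation, not the paper's actual unbundling equation), the following shows why active and reactive flows are coupled in both the active and reactive losses of a line:

```latex
% Standard approximation for a line with resistance R, reactance X,
% sending-end flows P and Q, and bus voltage magnitude |V|:
\[
  P_{\mathrm{loss}} \approx \frac{P^{2} + Q^{2}}{|V|^{2}}\, R ,
  \qquad
  Q_{\mathrm{loss}} \approx \frac{P^{2} + Q^{2}}{|V|^{2}}\, X ,
\]
% so any per-load unbundling of a line's losses has to account for the cross
% effect of that load's active and reactive power shares on both loss terms.
```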
Abstract:
The future convergence of voice, video and data applications on the Internet requires that next generation technology provides bandwidth and delay guarantees. Current technology trends are moving towards scalable aggregate-based systems where applications are grouped together and guarantees are provided at the aggregate level only. This solution alone is not enough for interactive video applications with sub-second delay bounds. This paper introduces a novel packet marking scheme that controls the end-to-end delay of an individual flow as it traverses a network enabled to supply aggregate-granularity Quality of Service (QoS). IPv6 Hop-by-Hop extension header fields are used to track the packet delay encountered at each network node and autonomous decisions are made on the best queuing strategy to employ. The results of network simulations are presented and it is shown that when the proposed mechanism is employed the requested delay bound is met with a 20% reduction in resource reservation and no packet loss in the network.
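A conceptual per-hop decision in the spirit of this scheme (the field names, units, and simple budget-splitting rule are assumptions for illustration, not the actual Hop-by-Hop option layout or queuing policy):

```python
# Each node reads the delay accumulated so far from a Hop-by-Hop option, compares
# it against the flow's end-to-end budget, and picks a queue autonomously.
def forward(packet, node_queue_delays_ms, hops_remaining):
    budget_left = packet["delay_bound_ms"] - packet["accumulated_delay_ms"]
    per_hop_allowance = budget_left / max(hops_remaining, 1)
    # If the default queue would exceed this hop's share of the remaining budget,
    # escalate the packet to the expedited queue.
    queue = ("expedited"
             if node_queue_delays_ms["default"] > per_hop_allowance
             else "default")
    packet["accumulated_delay_ms"] += node_queue_delays_ms[queue]
    return queue

pkt = {"delay_bound_ms": 150.0, "accumulated_delay_ms": 90.0}
print(forward(pkt, {"default": 25.0, "expedited": 5.0}, hops_remaining=3))  # expedited
```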
Abstract:
PURPOSE. This study was conducted to evaluate whether regions of the retinal neuropile become hypoxic during periods of high oxygen consumption and whether depletion of the outer retina reduces hypoxia and related changes in gene expression.
METHODS. Retinas from rhodopsin knockout (Rho(-/-)) mice were evaluated along with those of wild-type (WT) control animals. Retinas were also examined at the end of 12-hour dark or light periods, and a separate group was treated with L-cis-diltiazem at the beginning of a 12-hour dark period. Hypoxia was assessed by deposition of hypoxyprobe (HP) and HP-protein adducts were localized by immunohistochemistry and quantified using ELISA. Also, hypoxia-regulated gene expression and transcriptional activity were assessed alongside vascular density.
RESULTS. Hypoxia was observed in the inner nuclear and ganglion cell layers in WT retina and was significantly reduced in Rho(-/-) mice (P < 0.05). Retinal hypoxia was significantly increased during dark adaptation in WT mice (P < 0.05), whereas no change was observed in Rho(-/-) or L-cis-diltiazem-treated WT mice. Hypoxia-inducible factor (HIF)-1 alpha DNA-binding and VEGF mRNA expression in Rho(-/-) retina was significantly reduced in unison with outer retinal depletion (P < 0.05). Retina from the Rho(-/-) mice displayed an extensive intraretinal vascular network after 6 months, although there was evidence that capillary density was depleted in comparison with that in WT retinas.
CONCLUSIONS. Relative hypoxia occurs in the inner retina, especially during dark adaptation. Photoreceptor loss reduces retinal oxygen usage and hypoxia, which corresponds with attenuation of the retinal microvasculature. These studies suggest that under normal physiological conditions and diurnal cycles the adult retina exists in a state of borderline hypoxia, making this tissue particularly susceptible to even subtle reductions in perfusion.
Abstract:
This paper examines the ability of the doubly fed induction generator (DFIG) to deliver multiple reactive power objectives during variable wind conditions. The reactive power requirement is decomposed based on various control objectives (e.g. power factor control, voltage control, loss minimisation, and flicker mitigation) defined around different time frames (i.e. seconds, minutes, and hourly), and the control reference is generated by aggregating the individual reactive power requirement for each control strategy. A novel coordinated controller is implemented for the rotor-side converter and the grid-side converter considering their capability curves and illustrating that it can effectively utilise the aggregated DFIG reactive power capability for system performance enhancement. The performance of the multi-objective strategy is examined for a range of wind and network conditions, and it is shown that for the majority of the scenarios, more than 92% of the main control objective can be achieved while introducing the integrated flicker control scheme with the main reactive power control scheme. Therefore, optimal control coordination across the different control strategies can maximise the availability of ancillary services from DFIG-based wind farms without additional dynamic reactive power devices being installed in power networks.
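A rough sketch of how individual reactive power requirements might be aggregated against a converter capability limit (the priority ordering, names, and simple clamping rule are illustrative assumptions, not the paper's coordinated controller):

```python
# Aggregate per-objective reactive power requests, clamping to the available
# converter capability so higher-priority objectives are satisfied first.
def aggregate_q_reference(requirements, q_capability_kvar):
    """requirements: list of (objective_name, q_kvar) ordered by priority."""
    q_total, allocated = 0.0, {}
    for name, q in requirements:
        headroom = q_capability_kvar - abs(q_total)
        granted = max(min(q, headroom), -headroom)   # clamp to remaining capability
        allocated[name] = granted
        q_total += granted
    return q_total, allocated

q_ref, parts = aggregate_q_reference(
    [("voltage_control", 300.0), ("loss_minimisation", 150.0), ("flicker", 120.0)],
    q_capability_kvar=500.0)
print(q_ref, parts)   # 500.0 total; the flicker share is curtailed to 50 kvar
```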
Abstract:
This paper discusses methods of using the Internet as a communications medium between distributed generator sites to provide new forms of loss-of-mains protection. An analysis of the quality of the communications channels between several nodes on the network was carried out experimentally. It is shown that Internet connections in urban environments are already capable of providing real-time power system protection, whilst rural Internet connections are borderline suitable but could not yet be recommended as a primary method of protection. Two strategies for providing loss-of-mains protection over Internet protocol are considered: broadcast of a reference frequency or phasor representing the utility, and an Internet-based inter-tripping scheme.
Abstract:
This study presents a new method for determining the transmission network usage by loads and generators, which can then be used for transmission cost/loss allocation in an explainable and justifiable manner. The proposed method is based on solid physical grounds and circuit theory. It relies on dividing the currents through the network into two components; the first is attributed to power flows from generators to loads, whereas the second is due to the generators only. Unlike almost all the available methods, the proposed method is assumption free and hence more accurate than similar methods, even those having some physical basis. The proposed method is validated through a transformer analogy and theoretical derivations. The method is verified through application to the IEEE 30 bus system and the IEEE 118 test system. The results obtained verify many desirable features of the proposed method: greater accuracy in determining network usage, an explainable and transparent allocation, and accurate cost signals indicating the best locations to add loads and generation.
Abstract:
Loss-of-mains protection is an important component of the protection systems of embedded generation. The role of loss-of-mains is to disconnect the embedded generator from the utility grid in the event that connection to utility dispatched generation is lost. This is necessary for a number of reasons, including the safety of personnel during fault restoration and the protection of plant against out-of-synchronism reclosure to the mains supply. The incumbent methods of loss-of-mains protection were designed when the installed capacity of embedded generation was low, and known problems with nuisance tripping of the devices were considered acceptable because of the insignificant consequence to system operation. With the dramatic increase in the installed capacity of embedded generation over the last decade, the limitations of current islanding detection methods are no longer acceptable. This study describes a new method of loss-of-mains protection based on phasor measurement unit (PMU) technology, specifically using a low-cost PMU device of the authors' design which has been developed for distribution network applications. The proposed method addresses the limitations of the incumbent methods, providing a solution that is free of nuisance tripping and has a zero non-detection zone. This system has been tested experimentally and is shown to be practical, feasible and effective. Threshold settings for the new method are recommended based on data acquired from both the Great Britain and Ireland power systems.
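Conceptually, phasor-based loss-of-mains detection reduces to comparing a local measurement against a reference streamed from the utility network. A minimal sketch follows; the thresholds are placeholders rather than the settings recommended in the study:

```python
# Compare the locally measured phasor against a reference phasor from a point on
# the utility network and trip the embedded generator if they drift apart.
FREQ_DIFF_LIMIT_HZ = 0.1        # hypothetical illustrative threshold
PHASE_DRIFT_LIMIT_DEG = 10.0    # hypothetical illustrative threshold

def islanded(local, reference):
    """local/reference: dicts with 'freq_hz' and 'angle_deg' from synchronised PMUs."""
    freq_diff = abs(local["freq_hz"] - reference["freq_hz"])
    angle_diff = abs((local["angle_deg"] - reference["angle_deg"] + 180.0) % 360.0 - 180.0)
    return freq_diff > FREQ_DIFF_LIMIT_HZ or angle_diff > PHASE_DRIFT_LIMIT_DEG

print(islanded({"freq_hz": 50.32, "angle_deg": 47.0},
               {"freq_hz": 50.01, "angle_deg": 12.0}))   # True: trip the generator
```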
Abstract:
Recent research in industrial organisation has investigated the essential place that middlemen have in the networks that make up our global economy. In this paper we attempt to understand how such middlemen compete with each other through a game theoretic analysis using novel techniques from decision-making under ambiguity.
We study a purposely abstract and reduced model of one middleman who provides a two-sided platform, mediating surplus-creating interactions between two users. The middleman evaluates uncertain outcomes under positional ambiguity, taking into account the possibility of the emergence of an alternative middleman offering intermediary services to the two users.
Surprisingly, we find many situations in which the middleman will purposely extract maximal gains from her position. Only if there is a relatively low probability of a devastating loss of business under competition will the middleman adopt a more competitive attitude and extract less from her position.
Abstract:
Increasingly invasive bladder cancer cell lines displayed insensitivity toward a panel of dietary-derived ligands for members of the nuclear receptor superfamily. Insensitivity was defined through altered gene regulatory actions and cell proliferation and reflected both reduced receptor expression and elevated nuclear receptor corepressor 1 (NCOR1) expression. Stable overexpression of NCOR1 in sensitive cells (RT4) resulted in a panel of clones that recapitulated the resistant phenotype in terms of gene regulatory actions and proliferative responses toward ligand. Similarly, silencing RNA approaches to NCOR1 in resistant cells (EJ28) enhanced ligand gene regulatory and proliferation responses, including those mediated by peroxisome proliferator-activated receptor (PPAR) gamma and vitamin D receptor (VDR) receptors. Elevated NCOR1 levels generate an epigenetic lesion to target in resistant cells using the histone deacetylase inhibitor vorinostat, in combination with nuclear receptor ligands. Such treatments revealed strong additive interactions toward the PPARgamma, VDR and Farnesoid X-activated receptors. Genome-wide microarray and microfluidic quantitative real-time, reverse transcription-polymerase chain reaction approaches, following the targeting of NCOR1 activity and expression, revealed the selective capacity of this corepressor to govern common transcriptional events of underlying networks. Combined, these findings suggest that NCOR1 is a selective regulator of nuclear receptors, notably PPARgamma and VDR, and contributes to their loss of sensitivity. Combinations of epigenetic therapies that target NCOR1 may prove effective, even when receptor expression is reduced.
Abstract:
IP-based networks still do not offer the degree of reliability required by new multimedia services; achieving such reliability will be crucial to the success or failure of the new Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm to develop new protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of using the NPD to enhance some current QoS routing algorithms so that they offer a certain degree of protection.
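One plausible reading of how the two components could combine into a single protection degree (notation ours, shown purely for illustration; the paper's exact formulation may differ):

```latex
% FSD: a priori failure sensibility degree (failure probability of the protected path);
% FID: a posteriori failure impact degree (impact on the network if the failure occurs).
% A simple weighted combination is one option:
\[
  \mathrm{NPD} = \alpha\,\mathrm{FSD} + (1-\alpha)\,\mathrm{FID},
  \qquad 0 \le \alpha \le 1 .
\]
```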