947 results for 13078-029


Relevance:

10.00%

Publisher:

Abstract:

TCP performance degrades when end-to-end connections extend over wireless links, which are characterized by high bit error rates and intermittent connectivity. Such link characteristics can significantly degrade TCP performance, as the TCP sender assumes wireless losses to be congestion losses and takes unnecessary congestion-control actions. Link errors can be reduced by increasing transmission power, code redundancy (FEC), or the number of retransmissions (ARQ). But increasing power costs resources, increasing code redundancy reduces the available channel bandwidth, and increasing retransmission persistency increases end-to-end delay. The paper proposes a TCP optimization through proper tuning of power management, FEC, and ARQ in wireless environments (WLAN and WWAN). In particular, we conduct analytical and numerical analyses taking into account TCP (and "wireless-aware" TCP) performance under different settings. Our results show that increasing power, redundancy, and/or retransmission levels always improves TCP performance by reducing link-layer losses. However, such improvements come at a cost, and arbitrary improvement cannot be realized without paying a great deal in return. It is therefore important to optimize some kind of net utility function, maximizing throughput at the least possible cost.
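The abstract does not give the form of the net utility function; a minimal sketch consistent with its description, trading throughput against the cost of each link-layer knob (all symbols and cost weights below are assumptions, not taken from the paper), might be:

```latex
U(p, r, n) = T(p, r, n) - c_p\,p - c_r\,r - c_n\,n,
\qquad (p^*, r^*, n^*) = \arg\max_{p,\,r,\,n} U(p, r, n)
```

Here T(p, r, n) is the achieved TCP throughput at transmit power p, FEC redundancy r, and ARQ persistency n, and the weights c_p, c_r, c_n price power, bandwidth lost to coding, and added delay, respectively.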

Relevance:

10.00%

Publisher:

Abstract:

Routing protocols in wireless sensor networks (WSN) face two main challenges. First, the challenging environments in which WSNs are deployed negatively affect the quality of the routing process; routing protocols for WSNs should therefore recognize and react to node failures and packet losses. Second, sensor nodes are battery-powered, which makes power a scarce resource; routing protocols should optimize power consumption to prolong the lifetime of the WSN. In this paper, we present a new adaptive routing protocol for WSNs, which we call M^2RC. M^2RC has two phases: a mesh establishment phase and a data forwarding phase. In the first phase, M^2RC establishes the routing state needed to enable multipath data forwarding. In the second phase, M^2RC forwards data packets from the source to the sink. Targeting hop-by-hop reliability, an M^2RC forwarding node waits for an acknowledgement (ACK) that its packets were correctly received at the next hop. Based on this feedback, an M^2RC node applies multiplicative-increase/additive-decrease (MIAD) control to the number of neighbors targeted by its packet broadcast. We simulated M^2RC in the ns-2 simulator and compared it to the GRAB, Max-power, and Min-power routing schemes. Our simulations show that M^2RC achieves the highest throughput while consuming 10-30% less power per delivered report in scenarios where some nodes unexpectedly fail.
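The abstract does not spell out the MIAD update rule; a plausible sketch in Python, assuming a missed ACK triggers the multiplicative increase and a received ACK the additive decrease (constants and names are illustrative, not from the paper):

```python
class MiadFanout:
    """MIAD control of the number of neighbors targeted per broadcast."""

    def __init__(self, min_fanout=1, max_fanout=8,
                 increase_factor=2.0, decrease_step=1):
        self.min_fanout = min_fanout
        self.max_fanout = max_fanout
        self.increase_factor = increase_factor
        self.decrease_step = decrease_step
        self.fanout = min_fanout  # neighbors targeted by the next broadcast

    def on_ack(self):
        # Packet confirmed at the next hop: shrink the fanout additively
        # to save power.
        self.fanout = max(self.min_fanout, self.fanout - self.decrease_step)

    def on_timeout(self):
        # ACK missed: grow the fanout multiplicatively to restore
        # hop-by-hop delivery reliability.
        self.fanout = min(self.max_fanout,
                          max(1, int(self.fanout * self.increase_factor)))
```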

Relevance:

10.00%

Publisher:

Abstract:

Distributed hash tables have recently become a useful building block for a variety of distributed applications. However, current schemes based upon consistent hashing require both considerable implementation complexity and substantial storage overhead to achieve desired load-balancing goals. We argue in this paper that these goals can be achieved more simply and more cost-effectively. First, we suggest the direct application of the "power of two choices" paradigm, whereby an item is stored at the less loaded of two (or more) random alternatives. We then consider how associating a small constant number of hash values with a key can naturally be extended to support other load-balancing methods, including load-stealing or load-shedding schemes, as well as providing natural fault-tolerance mechanisms.
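The "power of two choices" placement itself is simple to state; a minimal Python sketch (the node list, load table, and hash salting are illustrative assumptions):

```python
import hashlib

def candidates(key, nodes, d=2):
    """Derive d candidate nodes by hashing the key with d different salts."""
    return [nodes[int(hashlib.sha1(f"{key}:{i}".encode()).hexdigest(), 16)
                  % len(nodes)]
            for i in range(d)]

def place(key, nodes, load, d=2):
    """Store the item at the least-loaded of its d hashed candidates."""
    target = min(candidates(key, nodes, d), key=lambda n: load[n])
    load[target] += 1
    return target
```

A lookup simply probes the same d candidates, so reads stay O(d) while the maximum load drops markedly compared with a single random choice.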

Relevance:

10.00%

Publisher:

Abstract:

This thesis addresses the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to huge real-world graphs in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach chooses some nodes as landmarks and computes, offline, an embedding for each node in the graph, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, so heuristic solutions are employed. Given a memory budget for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally on five real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the baseline approach of selecting landmarks at random. Finally, the techniques are applied to two important problems arising naturally in large-scale graphs: social search and community detection.
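The thesis may combine embeddings differently, but the standard landmark estimate is the triangle-inequality upper bound, which takes time linear in the number of landmarks:

```python
def estimate_distance(emb_u, emb_v):
    """Upper-bound estimate of d(u, v) from landmark embeddings.

    emb_u[i] and emb_v[i] hold the precomputed distances of u and v to
    landmark i. By the triangle inequality, d(u, v) <= d(u, l) + d(l, v)
    for every landmark l, so the tightest such bound is the minimum.
    """
    return min(du + dv for du, dv in zip(emb_u, emb_v))
```

The quality of this bound depends directly on where the landmarks sit relative to queried pairs, which is why the selection strategies compared in the thesis matter so much.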

Relevance:

10.00%

Publisher:

Abstract:

NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system while retaining sufficient information about it to carry out future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis. The compositional analysis is based on a strongly-typed Domain-Specific Language (DSL) for describing and reasoning about constrained-flow networks at various levels of sketchiness along with invariants that need to be enforced thereupon. In this paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user-interface when used in "sketch mode", and prove its soundness relative to appropriately-defined notions of validity. In a companion paper [6], we overview NetSketch, highlight its salient features, and illustrate how it could be used in two applications: the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications).
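The paper defines the DSL formally; purely as a loose illustration of what compositional analysis of constrained-flow networks can look like (the types, names, and interval semantics below are hypothetical, not NetSketch's actual formalism), one can type each component by bounds on its flows and check connections by containment:

```python
from dataclasses import dataclass

@dataclass
class FlowType:
    """Hypothetical interval constraint on a flow rate."""
    lo: float
    hi: float

def composable(out_t: FlowType, in_t: FlowType) -> bool:
    # A connection is safe if every rate the upstream component may
    # emit lies within what the downstream component accepts.
    return in_t.lo <= out_t.lo and out_t.hi <= in_t.hi
```

Checking each connection locally like this is what makes an analysis compositional: no whole-system state space needs to be explored, at the price of some exactness.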

Relevance:

10.00%

Publisher:

Abstract:

Particle filtering is a popular method used in systems for tracking human body pose in video. One key difficulty in using particle filtering is caused by the curse of dimensionality: generally, a very large number of particles is required to adequately approximate the underlying pose distribution in a high-dimensional state space. Although the number of degrees of freedom in the human body is quite large, in reality the subset of allowable configurations in state space is generally restricted by human biomechanics, and the trajectories in this allowable subspace tend to be smooth. Therefore, a framework is proposed to learn a low-dimensional representation of the high-dimensional human pose state space. This mapping can be learned using a Gaussian Process Latent Variable Model (GPLVM) framework. One important advantage of the GPLVM framework is that both the mapping to and the mapping from the embedded space are smooth; this facilitates sampling in the low-dimensional space, and samples generated in the low-dimensional embedded space are easily mapped back into the original high-dimensional space. Moreover, human body poses that are similar in the original space tend to be mapped close to each other in the embedded space; this property can be exploited when sampling in the embedded space. The proposed framework is tested in tracking 2D human body pose using a Scaled Prismatic Model. Experiments on real-life video sequences demonstrate the strength of the approach. In comparison with Multiple Hypothesis Tracking and the standard Condensation algorithm, the proposed algorithm is able to maintain tracking reliably throughout the long test sequences. It also handles singularity and self-occlusion robustly.
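Schematically, the benefit is that the resample-diffuse-evaluate loop runs in the latent space; a Python sketch of one filter step (the learned GPLVM mean mapping and the image likelihood are assumed given, and all names are illustrative):

```python
import numpy as np

def track_step(particles, weights, gp_mean_map, observe_likelihood, noise=0.05):
    """One particle-filter step in a low-dimensional GPLVM latent space.

    particles: (N, d) latent samples, with d much smaller than the pose
    dimensionality; gp_mean_map maps a latent point to a full pose;
    observe_likelihood scores a pose against the current video frame.
    """
    n = len(particles)
    # Resample particles in proportion to their current weights.
    particles = particles[np.random.choice(n, size=n, p=weights)]
    # Diffuse in the latent space, where valid pose trajectories are smooth.
    particles = particles + noise * np.random.randn(*particles.shape)
    # Map each latent sample back to a full pose and re-weight.
    poses = [gp_mean_map(z) for z in particles]
    weights = np.array([observe_likelihood(pose) for pose in poses])
    weights = weights / weights.sum()
    return particles, weights, poses
```

Because d is small, far fewer particles are needed than when sampling the full pose space directly.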

Relevance:

10.00%

Publisher:

Abstract:

In the framework of the iBench research project, our previous work created a domain-specific language, TRAFFIC [6], that facilitates specification, programming, and maintenance of distributed applications over a network. It allows safety properties to be formalized in terms of types and subtyping relations. Extending our previous work, we add Hindley-Milner style polymorphism [8] with constraints [9] to the type system of TRAFFIC. This allows a programmer to use the for-all quantifier to describe types of network components, raising the power and expressiveness of types to a level not possible before with propositional subtyping relations. Furthermore, we design our type system with a pluggable constraint system, so it can adapt to different application needs while maintaining soundness. In this paper, we show the soundness of the type system, which is not syntax-directed but makes typing derivations easier. We show that there is an equivalent syntax-directed type system, which is what a type-checker program would implement to verify the safety of a network flow. This is followed by a discussion of several constraint systems: polymorphism with subtyping constraints, linear programming, and Constraint Handling Rules (CHR) [3]. Finally, we provide some examples to illustrate the workings of these constraint systems.
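The abstract does not show the TRAFFIC syntax; as a generic illustration in the HM(X) style that Hindley-Milner polymorphism with a pluggable constraint system follows, a type scheme pairs quantified variables with a constraint that instantiations must satisfy:

```latex
\sigma ::= \forall \alpha_1 \ldots \alpha_n.\; C \Rightarrow \tau
```

Here C is drawn from the pluggable constraint system (for example, subtyping constraints such as \alpha <: \tau'), and a component of type \sigma may be used at any instantiation of the \alpha_i for which C holds.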

Relevance:

10.00%

Publisher:

Abstract:

An extension to the Boundary Contour System model is proposed to account for boundary completion through vertices with arbitrary numbers of orientations, in a manner consistent with psychophysical observations, by way of harmonic resonance in a neural architecture.

Relevance:

10.00%

Publisher:

Abstract:

This article describes a neural network model, called the VITEWRITE model, for generating handwriting movements. The model consists of a sequential controller, or motor program, that interacts with a trajectory generator to move a hand with redundant degrees of freedom. The neural trajectory generator is the Vector Integration to Endpoint (VITE) model for synchronous variable-speed control of multijoint movements. VITE properties enable a simple control strategy to generate complex handwritten script if the hand model contains redundant degrees of freedom. The proposed controller launches transient directional commands to independent hand synergies at times when the hand begins to move, or when a velocity peak in a given synergy is achieved. The VITE model translates these temporally disjoint synergy commands into smooth curvilinear trajectories among temporally overlapping synergetic movements. The separate "score" of onset times used in most prior models is hereby replaced by a self-scaling, activity-released "motor program" that uses few memory resources, enables each synergy to exhibit a unimodal velocity profile during any stroke, generates letters that are invariant under speed and size rescaling, and enables effortless connection of letter shapes into words. Speed and size rescaling are achieved by scalar GO and GRO signals that express computationally simple volitional commands. Psychophysical data concerning hand movements, such as the isochrony principle, asymmetric velocity profiles, and the two-thirds power law relating movement curvature and velocity, arise as emergent properties of model interactions.
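The two-thirds power law mentioned here is the classical empirical relation between curvature and speed in hand movements: angular velocity A scales with curvature C as

```latex
A(t) = k\,C(t)^{2/3}
```

or, equivalently, tangential velocity scales as v(t) = k\,\kappa(t)^{-1/3}, so the hand slows down in highly curved segments of a stroke. In the model this relation is not imposed but emerges from the interactions.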

Relevance:

10.00%

Publisher:

Abstract:

Lacticin 3147, enterocin AS-48, lacticin 481, variacin, and sakacin P are bacteriocins offering promising perspectives in terms of preservation and shelf-life extension of food products and should find commercial application in the near future. The studies detailing their characterization and bio-preservative applications are reviewed. Transcriptomic analyses showed a cell wall-targeted response of Lactococcus lactis IL1403 during the early stages of infection with the lytic bacteriophage c2, which is probably orchestrated by a number of membrane stress proteins and involves D-alanylation of membrane lipoteichoic acids, restoration of the physiological proton motive force disrupted following bacteriophage infection, and energy conservation. Sequencing of the eight plasmids of L. lactis subsp. cremoris DPC3758 from raw milk cheese revealed three anti-phage restriction/modification (R/M) systems; immunity/resistance to nisin, lacticin 481, cadmium, and copper; and six conjugative/mobilization regions. A food-grade derivative strain with enhanced bacteriophage resistance was generated via stacking of R/M plasmids. Sequencing and functional analysis of the four plasmids of L. lactis subsp. lactis biovar diacetylactis DPC3901 from raw milk cheese revealed genes novel to Lactococcus and typical of bacteria associated with plants, in addition to genes associated with plant-derived lactococcal strains. The functionality of a novel high-affinity regulated system for cobalt uptake was demonstrated. The bacteriophage-resistance and bacteriocin-production plasmid pMRC01 places a metabolic burden on lactococcal hosts, resulting in lowered growth rates and increased cell permeability and autolysis. The magnitude of these effects is strain-dependent but not related to bacteriocin production. Starters' acidification capacity is not significantly affected. Transcriptomic analyses showed that the pMRC01 abortive infection (Abi) system is probably subject to complex regulatory control by the Rgg-like ORF51 and CopG-like ORF58 proteins. These regulators are suggested to modulate the activity of the putative Abi effectors ORF50 and ORF49, which exhibit topology and functional similarities to the Rex system aborting bacteriophage λ lytic growth.

Relevance:

10.00%

Publisher:

Abstract:

Tree growth resources and the efficiency of resource use for biomass production determine the productivity of forest ecosystems. In nutrient-limited forests, nitrogen (N) fertilization increases foliage [N], which may increase photosynthetic rates, leaf area index (L), and thus light interception (IC). The product of such changes is a higher gross primary production and higher net primary production (NPP). However, fertilization may also shift carbohydrate partitioning from below- to aboveground, increasing aboveground NPP (ANPP). We analyzed the effects of long-term N fertilization on NPP, and on the NPP of long-term carbon-storing organs (NPPS), in a Pinus sylvestris forest on sandy soil, a wide-ranging forest type in the boreal region. We based our analyses on a combination of destructive harvesting, consecutive mensuration, and optical measurements of canopy openness. After eight years of fertilization with a total of 70 g N m-2, ANPP was 27 ± 7% higher in the fertilized (F) stand relative to the reference (R) stand, but although L increased relative to its pre-fertilization values, IC was not greater than in R. In the seventh year after treatment initiation, the increase in ANPP was matched by the decrease in belowground NPP (78 vs. 92 g C m-2 yr-1; ~17% of NPP), which, given the similarity of IC, suggests that the main effect of N fertilization was changed carbon partitioning rather than increased canopy photosynthesis. Annual NPPS increased linearly with growing-season temperature (T) in both treatments, with an upward shift of 70.2 g C m-2 yr-1 under fertilization, which also caused a greater amount of unexplained variation (r2 = 0.53 in R, 0.21 in F). Residuals of the NPPS-T relationship in F were related to growing-season precipitation (P, r2 = 0.48), indicating that T constrains productivity at this site regardless of fertility, while P is important in determining productivity where N limitation is alleviated. We estimated that, in a growing season of average T (11.5 ± 1.0 °C; 33-year mean), the NPPS response to N fertilization would be nullified with P 31 mm below the mean (325 ± 85 mm) and would double with P 109 mm above the mean. These results suggest that inter-annual variation in climate, particularly in P, may help explain the reported large variability in growth responses to fertilization of pine stands on sandy soils. Furthermore, forest management of long-rotation systems, such as those of boreal and northern temperate forests, must consider the efficiency of fertilization in terms of wood production in the context of the changes in climate predicted for the region.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider the problem of providing flexibility in solutions of two-machine shop scheduling problems. We use the concept of group-scheduling to characterize a whole set of schedules so as to provide more choice to the decision-maker at any decision point. A group-schedule is a sequence of groups of permutable operations defined on each machine, where each group is such that any permutation of the operations inside the group leads to a feasible schedule. The flexibility of a solution and its makespan are often conflicting objectives, so we search for a compromise between a low number of groups and a small makespan. We resolve the complexity status of the relevant problems for the two-machine flow shop, job shop, and open shop. A number of approximation algorithms are developed and their worst-case performance is analyzed. For the flow shop, an effective heuristic algorithm is proposed and the results of computational experiments are reported.
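For background on the two-machine flow shop referenced here (this is the classical result, not the paper's group-scheduling algorithm), the minimum makespan over ordinary permutation schedules is attained by Johnson's rule, which gives a natural baseline against which the makespan cost of added flexibility can be measured; a Python sketch:

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop F2||Cmax.

    jobs: list of (p1, p2) processing times on machines 1 and 2.
    Jobs with p1 <= p2 go first in increasing p1; the rest go last
    in decreasing p2. The resulting order minimizes the makespan.
    """
    first = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return first + last

def makespan(jobs, order):
    """Makespan of a permutation schedule on two machines."""
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]               # machine 1 finishes job j
        t2 = max(t2, t1) + jobs[j][1]  # machine 2 starts once both are ready
    return t2
```

A group-schedule must keep its makespan acceptable under every permutation within its groups, which is why fewer, larger groups (more flexibility) tend to push the worst-case makespan above this baseline.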