838 results for end-to-side


Relevance:

80.00%

Publisher:

Abstract:

The tube diameter in the reptation model is the distance between a given chain segment and its nearest segment in adjacent chains. This dimension is thus related to the cross-sectional area of polymer chains and to the closest approach among chains, without the effects of thermal fluctuation and steric repulsion. Previously calculated tube diameters are much larger, by about a factor of five, than the actual chain cross-sectional areas; this is ascribed to the local freedom required for mutual rearrangement among neighboring chain segments. This tube-diameter concept suggests a relationship to the corresponding entanglement spacing. Indeed, we report here that the critical molecular weight for the onset of entanglements is found to be M(c) = 28A/(⟨R²⟩₀/M), where A is the chain cross-sectional area and ⟨R²⟩₀ the mean-square end-to-end distance of a freely jointed chain of molecular weight M. The new, computed relationship between the critical number of backbone atoms for entanglement and the chain cross-sectional area of polymers, N(c) = A^0.44, is consistent with the cross-sectional area of polymer chains being the parameter that controls the critical entanglement number of backbone atoms of flexible polymers.
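
As a quick numerical illustration of the reported criterion, the sketch below (Python) evaluates M(c) = 28A/(⟨R²⟩₀/M); the input values are placeholders chosen only to show the arithmetic and are not taken from the abstract or from any specific polymer.

    # Sketch: evaluating the reported entanglement criterion M_c = 28*A/(<R^2>_0/M).
    # The numbers below are placeholders, not data from the abstract.
    def critical_molecular_weight(A_angstrom2, r2_over_m):
        """M_c from chain cross-sectional area A and <R^2>_0/M, as defined in the abstract."""
        return 28.0 * A_angstrom2 / r2_over_m

    if __name__ == "__main__":
        A = 20.0          # hypothetical chain cross-sectional area, Angstrom^2
        r2_over_m = 0.8   # hypothetical <R^2>_0/M, Angstrom^2 * mol / g
        print(f"M_c = {critical_molecular_weight(A, r2_over_m):.0f} g/mol")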

Relevance:

80.00%

Publisher:

Abstract:

A novel reconfigurable modular planetary-exploration robot system is introduced, and the three basic configurations in which sub-robots can be combined are analyzed in detail: a serial (head-to-tail) chain with the manipulator arm in front; a serial chain with the arm at the rear; and two sub-robots joined head-to-tail to form a closed ring. Taking two sub-robots as an example, a static analysis of the various combined configurations during slope climbing is carried out, and on this basis the climbing capability of the robot combinations is studied by simulation. The results show that the climbing capability of a sub-robot combination is closely related to its connection configuration. Physical experiments validated the simulation results well, and on the basis of these experiments the following conclusions are drawn: during climbing, the serial chain with the arm in front exhibits poor motion stability, while the head-to-tail ring connection is the preferred configuration for sub-robot combinations climbing slopes.

Relevance:

80.00%

Publisher:

Abstract:

Gakkel Ridge in the Arctic Ocean is an ultraslow-spreading ridge, the slowest in the world, with a full spreading rate decreasing from 14 mm/yr at its western end to 7 mm/yr at its eastern end. To study the histories of partial melting and melt refertilization in the oceanic mantle beneath Gakkel Ridge, both extremely fresh and altered abyssal peridotites from two dredge hauls (PS66-238 and HLY0102-D70) were selected for this research. Major- and trace-element data for the residual minerals suggest that all samples were refertilized by late enriched melts after low to moderate degrees (3-12%) of partial melting in the spinel stability field, whereas some samples also inherited signatures of partial melting in the garnet stability field. The Os isotopic compositions of the Gakkel samples have not been significantly affected by late processes such as seawater alteration and melt refertilization. Samples from both dredge hauls span a similar range of 187Os/188Os, from strongly unradiogenic (~0.114) in the harzburgites to values approaching those inferred for the primitive upper mantle (PUM) in some lherzolites (~0.129). Ancient depletion events recorded in the harzburgites, with Re-depletion ages of up to 2 billion years, are unrelated to the recent genesis of mid-ocean ridge basalts (MORB) beneath Gakkel Ridge. Comparison of highly siderophile elements (HSEs) between the fresh and altered samples suggests that both Pd and Re were affected, and thus mobile, during seawater alteration, whereas the other HSEs (Os, Ir, Ru and Pt) remained stable. The fractionated HSE patterns in the harzburgites suggest that both the PPGEs (Pt and Pd) and Re can be fractionated from the IPGEs (Os, Ir and Ru) at low degrees of partial melting, which may be due to physical entrainment of sulfide melts by silicate melts rather than to equilibrium partitioning between residues and silicate melts. The inferred HSE budget of the PUM confirms the earlier finding that both Ru/Ir and Pd/Ir are suprachondritic in the PUM, and some modification of the late-veneer hypothesis is required in light of this distinctive PUM composition. The HSE and Os isotopic compositions of the Gakkel abyssal peridotites indicate that the oceanic mantle is highly heterogeneous at the scale of a single dredge haul (<5 km): depleted and fertile mantle domains are likely mechanically juxtaposed in the asthenosphere in a 'plum pudding' fashion. The wide distribution of ancient depleted components in the asthenosphere suggests that the depleted MORB mantle (DMM) should not be treated as synonymous with the MORB source; the latter is only the fertile part of the former, i.e., the depleted components of the DMM contribute little or nothing to the genesis of MORB.
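
For readers unfamiliar with the Re-depletion ages quoted above, the sketch below shows the standard Re-depletion model-age calculation; the chondritic reference values are commonly cited parameters assumed here for illustration, not values reported by this study.

    import math

    # Sketch: the standard Re-depletion model age T_RD, which assumes Re/Os ~ 0 in the
    # residue since the melting event. The reference values below are commonly cited
    # chondritic parameters (assumptions for illustration, not this study's values).
    LAMBDA_RE187 = 1.666e-11          # decay constant of 187Re, 1/yr
    OS_CHONDRITE = 0.1270             # present-day chondritic 187Os/188Os
    RE_OS_CHONDRITE = 0.40186         # chondritic 187Re/188Os

    def re_depletion_age(os_sample):
        """Return T_RD in Gyr for a measured 187Os/188Os ratio."""
        return math.log(1.0 + (OS_CHONDRITE - os_sample) / RE_OS_CHONDRITE) / LAMBDA_RE187 / 1e9

    # A strongly unradiogenic harzburgite value like the ~0.114 quoted in the abstract
    # gives roughly 1.9 Gyr, consistent with the "up to 2 billion years" statement.
    print(f"T_RD = {re_depletion_age(0.114):.2f} Gyr")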

Relevance:

80.00%

Publisher:

Abstract:

As the size of digital systems increases, the mean time between single-component failures diminishes. To avoid component-related failures, large computers must be fault-tolerant. In this paper, we focus on methods for achieving a high degree of fault tolerance in multistage routing networks. We describe a multipath scheme for providing end-to-end fault tolerance on large networks; the scheme improves routing performance while keeping network latency low. We also describe a novel routing component, RN1, which implements this scheme, and show how it can serve as the basic building block for fault-tolerant multistage routing networks.
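
To make the end-to-end benefit of multipath routing concrete, here is a minimal sketch of a toy reliability model: k disjoint paths, each failing independently with probability p, with delivery succeeding if any path works. This is only an illustration of the general idea, not a model of the RN1 component or of the paper's network.

    import random

    # Sketch: end-to-end delivery over k independent paths that each fail with probability p.
    def delivery_success_probability(p_path_failure, k_paths):
        return 1.0 - p_path_failure ** k_paths

    def simulate(p_path_failure, k_paths, trials=100_000):
        ok = sum(any(random.random() > p_path_failure for _ in range(k_paths)) for _ in range(trials))
        return ok / trials

    if __name__ == "__main__":
        for k in (1, 2, 4):
            print(k, delivery_success_probability(0.05, k), round(simulate(0.05, k), 4))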

Relevance:

80.00%

Publisher:

Abstract:

This technical report describes a new protocol, the Unique Token Protocol, for reliable message communication. This protocol eliminates the need for end-to-end acknowledgments and minimizes the communication effort when no dynamic errors occur. Various properties of end-to-end protocols are presented, and the Unique Token Protocol is shown to solve the associated problems. It eliminates source buffering by maintaining at least two copies of a message in the network, and a token is used to decide whether a message was delivered to the destination exactly once. The report also presents a possible implementation of the protocol in a wormhole-routed, 3-D mesh network.
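
As a rough illustration of the two ideas named above (keeping at least two in-network copies so the source need not buffer, and a unique token guarding exactly-once delivery), here is a toy Python sketch; its data structures and rules are assumptions for illustration and are not the protocol's actual specification.

    from collections import deque

    # Sketch: a message always has at least two in-network copies until delivery, and a
    # unique token records that the destination consumed it exactly once (toy model only).
    class Message:
        def __init__(self, msg_id, payload):
            self.msg_id = msg_id
            self.payload = payload
            self.holders = deque()        # nodes currently storing a copy
            self.token_consumed = False   # unique token: set when delivered exactly once

    def advance(msg, next_node):
        """Copy the message to the next hop; release the oldest copy only if two remain."""
        msg.holders.append(next_node)
        if len(msg.holders) > 2:
            msg.holders.popleft()

    def deliver(msg):
        """Destination consumes the message at most once, guarded by the token."""
        if not msg.token_consumed:
            msg.token_consumed = True
            return msg.payload
        return None

    msg = Message("m1", b"payload")
    msg.holders.append("source")              # the first copy
    for hop in ("switch-A", "switch-B", "dest"):
        advance(msg, hop)
    print(deliver(msg), deliver(msg))         # b'payload' None -> the second attempt is suppressed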

Relevance:

80.00%

Publisher:

Abstract:

Background: Anterior open bite occurs when there is a lack of vertical overlap of the upper and lower incisors. The aetiology is multifactorial, including oral habits, unfavourable growth patterns and enlarged lymphatic tissue with mouth breathing. Several treatments have been proposed to correct this malocclusion, but the interventions are not supported by strong scientific evidence.
Objectives: The aim of this systematic review was to evaluate orthodontic and orthopaedic treatments to correct anterior open bite in children.
Search methods: The following databases were searched: the Cochrane Oral Health Group's Trials Register (to 14 February 2014); the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2014, Issue 1); MEDLINE via OVID (1946 to 14 February 2014); EMBASE via OVID (1980 to 14 February 2014); LILACS via BIREME Virtual Health Library (1982 to 14 February 2014); BBO via BIREME Virtual Health Library (1980 to 14 February 2014); and SciELO (1997 to 14 February 2014). We searched for ongoing trials via ClinicalTrials.gov (to 14 February 2014). Chinese journals were handsearched and the bibliographies of papers were retrieved.
Selection criteria: All randomised or quasi-randomised controlled trials of orthodontic or orthopaedic treatments, or both, to correct anterior open bite in children.
Data collection and analysis: Two review authors independently assessed the eligibility of all reports identified. Risk ratios (RRs) and corresponding 95% confidence intervals (CIs) were calculated for dichotomous data; continuous data were expressed as described by the trial authors.
Main results: Three randomised controlled trials were included, comparing: the effects of Frankel's function regulator-4 (FR-4) with lip-seal training versus no treatment; repelling-magnet splints versus bite-blocks; and a palatal crib associated with high-pull chincup versus no treatment. The study comparing repelling-magnet splints versus bite-blocks could not be analysed because the authors interrupted the treatment earlier than planned due to side effects in four of ten patients. FR-4 associated with lip-seal training (RR = 0.02, 95% CI 0.00 to 0.38) and removable palatal crib associated with high-pull chincup (RR = 0.23, 95% CI 0.11 to 0.48) were able to correct anterior open bite. No study described its randomisation process or sample size calculation, there was no blinding in the cephalometric analysis, and two of the studies evaluated two interventions at the same time. These results should therefore be viewed with caution.
Authors' conclusions: There is weak evidence that the interventions FR-4 with lip-seal training and palatal crib associated with high-pull chincup are able to correct anterior open bite. Given that the included trials have potential for bias, these results must be viewed with caution. Recommendations for clinical practice cannot be made based only on the results of these trials. More randomised controlled trials are needed to elucidate the interventions for treating anterior open bite.
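
For readers unfamiliar with the dichotomous-outcome statistics quoted above, the sketch below shows how a risk ratio and its 95% confidence interval are computed from a 2x2 table; the counts are hypothetical and are not data from the included trials.

    import math

    # Sketch: risk ratio (RR) and 95% CI from a 2x2 table of dichotomous outcomes.
    # The counts are hypothetical, not data from the review's included trials.
    def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
        rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
        se_log = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
        lo = math.exp(math.log(rr) - 1.96 * se_log)
        hi = math.exp(math.log(rr) + 1.96 * se_log)
        return rr, (lo, hi)

    print(risk_ratio(events_tx=3, n_tx=25, events_ctrl=20, n_ctrl=25))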

Relevance:

80.00%

Publisher:

Abstract:

TCP performance degrades when end-to-end connections extend over wireless links, which are characterized by high bit-error rates and intermittent connectivity. Such link characteristics can significantly degrade TCP performance, as the TCP sender assumes wireless losses to be congestion losses and takes unnecessary congestion-control actions. Link errors can be reduced by increasing transmission power, code redundancy (FEC) or the number of retransmissions (ARQ). But increasing power costs resources, increasing code redundancy reduces the available channel bandwidth, and increasing persistency increases end-to-end delay. This paper proposes optimizing TCP through proper tuning of power management, FEC and ARQ in wireless environments (WLAN and WWAN). In particular, we conduct analytical and numerical analyses taking into account "wireless-aware" TCP performance under different settings. Our results show that increasing power, redundancy and/or retransmission levels always improves TCP performance by reducing link-layer losses. However, such improvements often come at a cost, and arbitrary improvement cannot be realized without paying a great deal in return. It is therefore important to consider some kind of net utility function to be optimized, thus maximizing throughput at the least possible cost.
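
To illustrate the "net utility" idea, the sketch below searches a small grid of (power, FEC, ARQ) settings for the one maximizing throughput minus cost; the loss and cost models are toy assumptions for illustration, not the paper's analytical model.

    import itertools

    # Sketch: pick the (power, FEC, ARQ) setting that maximizes a toy net utility.
    def residual_loss(p_bit_error, fec_strength, arq_retries):
        per = 1 - (1 - p_bit_error) ** 1000            # packet error rate, 1000-bit packets
        per_after_fec = per * (1 - fec_strength)        # FEC removes a fraction of errors
        return per_after_fec ** (arq_retries + 1)       # loss only if all attempts fail

    def utility(power, fec_strength, arq_retries):
        p_bit_error = 1e-4 / power                      # more power -> fewer bit errors (toy)
        goodput = (1 - residual_loss(p_bit_error, fec_strength, arq_retries)) * (1 - fec_strength)
        cost = 0.2 * power + 0.1 * arq_retries          # energy + delay cost (toy weights)
        return goodput - cost

    best = max(itertools.product((1, 2, 4), (0.0, 0.25, 0.5), (0, 1, 2)),
               key=lambda s: utility(*s))
    print("best (power, FEC, ARQ):", best)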

Relevance:

80.00%

Publisher:

Abstract:

The Science of Network Service Composition has clearly emerged as one of the grand themes driving many of our research questions in the networking field today [NeXtworking 2003]. This driving force stems from the rise of sophisticated applications and new networking paradigms. By "service composition" we mean that the performance and correctness properties local to the various constituent components of a service can be readily composed into global (end-to-end) properties without re-analyzing any of the constituent components in isolation, or as part of the whole composite service. The set of laws that would govern such composition is what will constitute that new science of composition. The combined heterogeneity and dynamic open nature of network systems makes composition quite challenging, and thus programming network services has been largely inaccessible to the average user. We identify (and outline) a research agenda in which we aim to develop a specification language that is expressive enough to describe different components of a network service, and that will include type hierarchies inspired by type systems in general programming languages that enable the safe composition of software components. We envision this new science of composition to be built upon several theories (e.g., control theory, game theory, network calculus, percolation theory, economics, queuing theory). In essence, different theories may provide different languages by which certain properties of system components can be expressed and composed into larger systems. We then seek to lift these lower-level specifications to a higher level by abstracting away details that are irrelevant for safe composition at the higher level, thus making theories scalable and useful to the average user. In this paper we focus on services built upon an overlay management architecture, and we use control theory and QoS theory as example theories from which we lift up compositional specifications.

Relevance:

80.00%

Publisher:

Abstract:

The development and deployment of distributed network-aware applications and services over the Internet require the ability to compile and maintain a model of the underlying network resources with respect to (one or more) characteristic properties of interest. To be manageable, such models must be compact, and must enable a representation of properties along temporal, spatial, and measurement-resolution dimensions. In this paper, we propose a general framework for the construction of such metric-induced models using end-to-end measurements. We instantiate our approach using one such property, packet loss rates, and present an analytical framework for the characterization of Internet loss topologies. From the perspective of a server, the loss topology is a logical tree rooted at the server with clients at its leaves, in which edges represent lossy paths between pairs of internal network nodes. We show how end-to-end unicast packet-probing techniques can be used to (1) infer a loss topology and (2) identify the loss rates of links in an existing loss topology. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. We report on simulation, implementation, and Internet deployment results that show the effectiveness of our approach and its robustness, in terms of accuracy and convergence, over a wide range of network conditions.
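
As an illustration of how end-to-end unicast probing can expose a shared lossy link, the sketch below simulates back-to-back probes to two receivers behind a common link and estimates the shared loss rate from the marginal and joint delivery probabilities. The loss rates are made up, and the estimator shown is a generic tree-model estimator rather than necessarily the paper's exact procedure.

    import random

    # Sketch: with success probabilities s, a, b on the shared link and the two leaf links,
    # p_A = s*a, p_B = s*b and p_AB = s*a*b, so s can be estimated as p_A*p_B / p_AB.
    def simulate(shared_loss, leaf_a_loss, leaf_b_loss, probes=200_000):
        recv_a = recv_b = recv_both = 0
        for _ in range(probes):
            shared_ok = random.random() > shared_loss
            a_ok = shared_ok and random.random() > leaf_a_loss
            b_ok = shared_ok and random.random() > leaf_b_loss
            recv_a += a_ok
            recv_b += b_ok
            recv_both += a_ok and b_ok
        p_a, p_b, p_ab = recv_a / probes, recv_b / probes, recv_both / probes
        return 1.0 - p_a * p_b / p_ab          # estimated loss rate of the shared link

    print(f"estimated shared loss = {simulate(0.03, 0.01, 0.02):.3f}")   # true value 0.03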

Relevance:

80.00%

Publisher:

Abstract:

Accurate measurement of network bandwidth is crucial for flexible Internet applications and protocols which actively manage and dynamically adapt to changing utilization of network resources. These applications must do so to perform tasks such as distributing and delivering high-bandwidth media, scheduling service requests and performing admission control. Extensive work has focused on two approaches to measuring bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope. In this paper, we develop and simulate end-to-end probing methods which can measure bottleneck bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a set of flows. As another important contribution, we describe a number of practical applications which we foresee as standing to benefit from solutions to this problem, especially in emerging, flexible network architectures such as overlay networks, ad-hoc networks, peer-to-peer architectures and massively accessed content servers.
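
For background, the sketch below shows the classic packet-pair estimate that end-to-end bandwidth probing builds on; the timestamps are hypothetical. The paper's contribution, measuring bottlenecks on arbitrary subpaths, requires richer probe structures than this simple case.

    # Sketch: two back-to-back packets of size L are spread out by the bottleneck link;
    # the receiver-side gap t2 - t1 gives an estimate C = L / (t2 - t1).
    def packet_pair_estimate(packet_size_bytes, t1, t2):
        """Bottleneck capacity estimate in bits per second from receive timestamps."""
        gap = t2 - t1
        if gap <= 0:
            raise ValueError("timestamps must be strictly increasing")
        return packet_size_bytes * 8 / gap

    # Hypothetical timestamps: 1500-byte packets arriving 1.2 ms apart -> ~10 Mbit/s.
    print(f"{packet_pair_estimate(1500, 0.0100, 0.0112) / 1e6:.1f} Mbit/s")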

Relevance:

80.00%

Publisher:

Abstract:

Growing interest in inference and prediction of network characteristics is justified by their importance for a variety of network-aware applications. One widely adopted strategy to characterize network conditions relies on active, end-to-end probing of the network. Active end-to-end probing techniques differ in (1) the structural composition of the probes they use (e.g., number and size of packets, the destinations of the various packets, the protocols used, etc.), (2) the entity making the measurements (e.g., sender vs. receiver), and (3) the techniques used to combine measurements in order to infer specific metrics of interest. In this paper, we present Periscope: a Linux API that enables the definition of new probing structures and inference techniques from user space through a flexible interface. Periscope requires no support from clients beyond the ability to respond to ICMP ECHO REQUESTs, and is designed to minimize user/kernel crossings and to enforce various constraints (e.g., back-to-back packet transmissions, fine-grained timing measurements). We show how to use Periscope for two different probing purposes, namely the measurement of shared packet losses between pairs of endpoints and the measurement of subpath bandwidth. Results from Internet experiments for both of these goals are also presented.
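
To convey the flavor of "probe structure plus pluggable inference" without reproducing Periscope's actual interface (which the abstract does not spell out), here is a hypothetical user-space analogue in Python; all names and fields are invented for illustration only.

    from dataclasses import dataclass
    from typing import Callable, List

    # Sketch: a toy separation between the description of a probe and the inference routine
    # applied to its measurements. This is NOT Periscope's real (kernel-assisted, C) API.
    @dataclass
    class ProbeStructure:
        packet_sizes: List[int]        # sizes of the packets in one probe, in bytes
        destinations: List[str]        # where each packet is sent
        back_to_back: bool             # whether packets must leave with no gap

    @dataclass
    class ProbeResult:
        send_times: List[float]
        recv_times: List[float]        # a lost packet could be recorded as float('nan')

    def run_experiment(structure: ProbeStructure,
                       send: Callable[[ProbeStructure], ProbeResult],
                       infer: Callable[[List[ProbeResult]], float],
                       rounds: int = 100) -> float:
        """Send the probe `rounds` times and hand all results to the inference routine."""
        results = [send(structure) for _ in range(rounds)]
        return infer(results)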

Relevance:

80.00%

Publisher:

Abstract:

Personal communication devices are increasingly equipped with sensors for passive monitoring of encounters and surroundings. We envision the emergence of services that enable a community of mobile users carrying such resource-limited devices to query such information at remote locations in the field in which they collectively roam. One approach to implement such a service is directed placement and retrieval (DPR), whereby readings/queries about a specific location are routed to a node responsible for that location. In a mobile, potentially sparse setting, where end-to-end paths are unavailable, DPR is not an attractive solution as it would require the use of delay-tolerant (flooding-based store-carry-forward) routing of both readings and queries, which is inappropriate for applications with data freshness constraints, and which is incompatible with stringent device power/memory constraints. Alternatively, we propose the use of amorphous placement and retrieval (APR), in which routing and field monitoring are integrated through the use of a cache management scheme coupled with an informed exchange of cached samples to diffuse sensory data throughout the network, in such a way that a query answer is likely to be found close to the query origin. We argue that knowledge of the distribution of query targets could be used effectively by an informed cache management policy to maximize the utility of collective storage of all devices. Using a simple analytical model, we show that the use of informed cache management is particularly important when the mobility model results in a non-uniform distribution of users over the field. We present results from extensive simulations which show that in sparsely-connected networks, APR is more cost-effective than DPR, that it provides extra resilience to node failure and packet losses, and that its use of informed cache management yields superior performance.
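
The sketch below illustrates one plausible form of the informed cache management mentioned above: evict the cached sample whose location is least likely to be queried, according to an assumed query-target distribution. The policy details and numbers are illustrative, not the paper's exact scheme.

    import random

    # Sketch: a bounded cache of (location, reading) samples that evicts the sample whose
    # location is least likely to be a query target (toy version of informed cache management).
    class InformedCache:
        def __init__(self, capacity, query_popularity):
            self.capacity = capacity
            self.query_popularity = query_popularity   # location -> probability of being queried
            self.samples = {}                          # location -> most recent reading

        def insert(self, location, reading):
            self.samples[location] = reading
            if len(self.samples) > self.capacity:
                victim = min(self.samples, key=lambda loc: self.query_popularity.get(loc, 0.0))
                del self.samples[victim]

        def answer(self, location):
            return self.samples.get(location)          # None -> must be answered elsewhere

    popularity = {"cell-3": 0.5, "cell-7": 0.3, "cell-9": 0.2, "cell-12": 0.01}
    cache = InformedCache(capacity=3, query_popularity=popularity)
    for loc in popularity:
        cache.insert(loc, reading=random.random())
    print(sorted(cache.samples))                       # "cell-12" is the first to be evicted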

Relevance:

80.00%

Publisher:

Abstract:

In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications on the other hand, owing to their inherent n-squared nature, are realizable only in small to medium scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and as a consequence deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first one strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.
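
To show how the two objectives rank candidate neighbor sets differently, here is a small sketch with made-up per-destination rates; in the paper these rates come from end-to-end measurements and the policies are applied by every node when building the overlay.

    # Sketch: Max-Min keeps the slowest destination as fast as possible; Max-Sum maximizes
    # the aggregate rate. The candidate neighbor sets and rates below are hypothetical.
    candidates = {
        ("A", "B"): {"A": 8, "B": 8, "C": 8, "D": 8},     # balanced rates to every destination
        ("C", "D"): {"A": 20, "B": 2, "C": 20, "D": 20},  # high aggregate, one starved destination
    }

    max_min = max(candidates, key=lambda s: min(candidates[s].values()))
    max_sum = max(candidates, key=lambda s: sum(candidates[s].values()))
    print("Max-Min picks", max_min)   # ('A', 'B'): the slowest destination gets 8 instead of 2
    print("Max-Sum picks", max_sum)   # ('C', 'D'): aggregate 62 instead of 32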

Relevance:

80.00%

Publisher:

Abstract:

We present a transport protocol whose goal is to reduce power consumption without compromising the delivery requirements of applications. To meet its goal of energy efficiency, our transport protocol (1) contains mechanisms to balance end-to-end versus local retransmissions; (2) minimizes acknowledgment traffic using receiver-regulated, rate-based flow control combined with selective acknowledgments and in-network caching of packets; and (3) aggressively seeks to avoid any congestion-based packet loss. Within a recently developed ultra-low-power multi-hop wireless network system, extensive simulations and experimental results demonstrate that our transport protocol meets its goal of preserving the energy efficiency of the underlying network.
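
The sketch below illustrates why balancing local against end-to-end retransmissions matters for energy: with in-network caching, a loss costs one extra hop of transmissions rather than a retransmission across the whole path. The energy and loss model (one unit per hop per transmission, independent hop losses) is a toy assumption, not the paper's.

    # Sketch: expected transmissions to deliver one packet over a lossy multi-hop path,
    # with and without local (hop-by-hop) recovery from in-network caches.
    def expected_tx_cost(hops, p_loss, local_recovery):
        per_hop_attempts = 1.0 / (1.0 - p_loss)            # geometric retries on each hop
        if local_recovery:
            return hops * per_hop_attempts                  # retransmit only the failed hop
        # end-to-end recovery: the whole path is retried until every hop succeeds at once
        p_path = (1.0 - p_loss) ** hops
        return hops / p_path

    for h in (2, 4, 8):
        print(h, round(expected_tx_cost(h, 0.1, True), 1), round(expected_tx_cost(h, 0.1, False), 1))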

Relevance:

80.00%

Publisher:

Abstract:

Recent advances in processor speeds, mobile communications and battery life have enabled computers to evolve from completely wired to completely mobile. In the most extreme case, all nodes are mobile and communication takes place at available opportunities, using both traditional communication infrastructure as well as the mobility of intermediate nodes. These are mobile opportunistic networks. Data communication in such networks is a difficult problem, because of the dynamic underlying topology, the scarcity of network resources and the lack of global information. Establishing end-to-end routes in such networks is usually not feasible. Instead, a store-and-carry forwarding paradigm is better suited for such networks. This dissertation describes and analyzes algorithms for forwarding of messages in such networks.

In order to design effective forwarding algorithms for mobile opportunistic networks, we start by building an understanding of the set of all paths between nodes, which represent the available opportunities for any forwarding algorithm. Relying on real measurements, we enumerate paths between nodes and uncover what we refer to as the path explosion effect. The term path explosion refers to the fact that the number of paths between a randomly selected pair of nodes increases exponentially with time. We draw from the theory of epidemics to model and explain the path explosion effect. This is the first contribution of the thesis, and is a key observation that underlies subsequent results.

Our second contribution is the study of forwarding algorithms. For this, we rely on trace-driven simulations of different algorithms that span a range of design dimensions. We compare the performance (success rate and average delay) of these algorithms. We make the surprising observation that most algorithms we consider have roughly similar performance. We explain this result in light of the path explosion phenomenon.

While the performance of most algorithms we studied was roughly the same, these algorithms differed in terms of cost. This prompted us to focus on designing algorithms with the explicit intent of reducing costs. For this, we cast the problem of forwarding as an optimal stopping problem. Our third main contribution is the design of strategies based on optimal stopping principles, which we refer to as delegation schemes. Our analysis shows that using a delegation scheme reduces cost over naive forwarding by a factor of O(√N), where N is the number of nodes in the network. We further validate this result on real traces, where the cost reduction observed is even greater.

Our results so far include a key assumption, which is unbounded buffers on nodes. Next, we relax this assumption, so that the problem shifts to one of prioritization of messages for transmission and dropping. Our fourth contribution is the study of message prioritization schemes, combined with forwarding. Our main result is that one achieves higher performance by assigning higher priorities to young messages in the network. We again interpret this result in light of the path explosion effect.
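
Since the abstract only names the delegation idea, the sketch below gives a minimal, self-contained illustration of a delegation-style forwarding rule and counts how many copies it creates; the quality metric, encounter model and all numbers are assumptions for illustration, not the dissertation's experimental setup.

    import random

    # Sketch: each copy of a message remembers the best "quality" (e.g., contact rate with
    # the destination) it has seen; on an encounter, a carrier delegates a new copy only if
    # the encountered node beats that record, then raises its own threshold.
    def delegation_forwarding(qualities, source, destination, encounters):
        """Return the number of copies created before reaching the destination."""
        carriers = {source: qualities[source]}       # node -> threshold carried with its copy
        copies = 1
        for a, b in encounters:                      # stream of pairwise encounters
            for holder, other in ((a, b), (b, a)):
                if holder in carriers and other not in carriers:
                    if other == destination:
                        return copies
                    if qualities[other] > carriers[holder]:
                        carriers[holder] = qualities[other]   # raise the holder's threshold
                        carriers[other] = qualities[other]    # delegate a copy
                        copies += 1
        return copies

    n = 50
    qualities = {i: random.random() for i in range(n)}
    encounters = [(random.randrange(n), random.randrange(n)) for _ in range(20_000)]
    print("copies made:", delegation_forwarding(qualities, 0, n - 1, encounters))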