11 results for MRI

in Boston University Digital Common


Relevance:

10.00%

Abstract:

Recent measurement-based studies reveal that most Internet connections are short in terms of the amount of traffic they carry (mice), while a small fraction of the connections carry a large portion of the traffic (elephants). A careful study of the TCP protocol shows that, without help from an Active Queue Management (AQM) policy, short connections tend to lose to long connections in their competition for bandwidth. This is because short connections do not gain detailed knowledge of the network state, and therefore they are doomed to be less competitive due to the conservative nature of the TCP congestion control algorithm. Inspired by the Differentiated Services (Diffserv) architecture, we propose to give preferential treatment to short connections inside the bottleneck queue, so that short connections experience a lower packet drop rate than long connections. This is done by employing the RIO (RED with In and Out) queue management policy, which uses different drop functions for different classes of traffic. Our simulation results show that: (1) in a highly loaded network, preferential treatment is necessary to provide short TCP connections with better response time and fairness without hurting the performance of long TCP connections; (2) the proposed scheme still delivers packets in a FIFO manner at each link, thus it maintains statistical multiplexing gain and does not misorder packets; (3) choosing a smaller default initial timeout value for TCP can help enhance the performance of short TCP flows, though not as effectively as our scheme and at the risk of congestion collapse; (4) in the worst case, our proposal works as well as a regular RED scheme, in terms of response time and goodput.
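As a concrete illustration of the RIO mechanism described above, the sketch below applies two RED-style drop curves: a gentler one for packets of short ("in") connections and a harsher one for long ("out") connections. All thresholds are hypothetical placeholders, not the paper's parameters.

```python
def red_drop_prob(avg_queue, min_th, max_th, max_p):
    """Standard RED drop function: no drops below min_th, a linear ramp
    up to max_p between min_th and max_th, certain drop above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def rio_drop_prob(avg_queue, is_short):
    # Hypothetical thresholds: short connections see higher thresholds and
    # a lower maximum drop probability, i.e. preferential treatment.
    if is_short:
        return red_drop_prob(avg_queue, min_th=40, max_th=80, max_p=0.02)
    return red_drop_prob(avg_queue, min_th=20, max_th=60, max_p=0.10)

# At an average queue of 50, short connections are dropped far less often.
print(rio_drop_prob(50, True), rio_drop_prob(50, False))
```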

Relevance:

10.00%

Abstract:

The congestion control mechanisms of TCP make it vulnerable in an environment where flows with different congestion-sensitivity compete for scarce resources. With the increasing amount of unresponsive UDP traffic in today's Internet, new mechanisms are needed to enforce fairness in the core of the network. We propose a scalable Diffserv-like architecture, where flows with different characteristics are classified into separate service queues at the routers. Such class-based isolation provides protection so that flows with different characteristics do not negatively impact one another. In this study, we examine different aspects of UDP and TCP interaction and possible gains from segregating UDP and TCP into different classes. We also investigate the utility of further segregating TCP flows into two classes: a class of short flows and a class of long flows. Results are obtained analytically for both Tail-drop and Random Early Drop (RED) routers. Class-based isolation has the following salient features: (1) better fairness, (2) improved predictability for all kinds of flows, (3) lower transmission delay for delay-sensitive flows, and (4) better control over the Quality of Service (QoS) of a particular traffic type.
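A minimal sketch of the classification step, assuming one per-class FIFO queue at the router; the class names, threshold, and packet representation are illustrative, not the paper's.

```python
from collections import deque

# One FIFO per traffic class, so unresponsive UDP traffic cannot starve
# congestion-controlled TCP, and short TCP flows are shielded from long ones.
QUEUES = {"udp": deque(), "tcp_short": deque(), "tcp_long": deque()}
SHORT_FLOW_THRESHOLD = 20  # packets seen so far; assumed short/long cutoff

def classify(pkt, flow_pkt_count):
    if pkt["proto"] == "UDP":
        return "udp"
    return "tcp_short" if flow_pkt_count <= SHORT_FLOW_THRESHOLD else "tcp_long"

def enqueue(pkt, flow_pkt_count):
    QUEUES[classify(pkt, flow_pkt_count)].append(pkt)

enqueue({"proto": "TCP", "payload": b"..."}, flow_pkt_count=3)  # -> tcp_short
```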

Relevance:

10.00%

Abstract:

Long-range dependence has been observed in many recent Internet traffic measurements. In addition, some recent studies have shown that under certain network conditions, TCP itself can produce traffic that exhibits dependence over limited timescales, even in the absence of higher-level variability. In this paper, we use a simple Markovian model to argue that when the loss rate is relatively high, TCP's adaptive congestion control mechanism indeed generates traffic with OFF periods exhibiting power-law shape over several timescales and thus introduces pseudo-long-range dependence into the overall traffic. Moreover, we observe that more variable initial retransmission timeout values for different packets introduce more variable packet inter-arrival times, which increases the burstiness of the overall traffic. We can thus explain why a single TCP connection can produce a time series that can be misidentified as self-similar using standard tests.
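To make the misidentification claim concrete, here is a toy illustration (not the paper's Markov model): OFF periods drawn from a Pareto (power-law) distribution yield a packet series whose aggregated-variance slope, a standard self-similarity test, comes out flatter than short-range-dependent traffic would produce. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# OFF periods drawn from a Pareto (power-law) distribution, fixed-length ON
# bursts of back-to-back packets.
def on_off_series(n_cycles, alpha=1.4, on_len=10):
    series = []
    for _ in range(n_cycles):
        series += [1] * on_len                         # packets sent back-to-back
        series += [0] * (int(rng.pareto(alpha)) + 1)   # power-law silence
    return np.array(series)

def variance_time_slope(x, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance test: a slope near -1 indicates short-range
    dependence; a distinctly flatter slope is often read as self-similarity."""
    vs = [np.var(x[:len(x) // m * m].reshape(-1, m).mean(axis=1))
          for m in scales]
    return np.polyfit(np.log(scales), np.log(vs), 1)[0]

print(variance_time_slope(on_off_series(5000)))   # noticeably flatter than -1
```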

Relevance:

10.00%

Abstract:

An increasing number of applications, such as distributed interactive simulation, live auctions, distributed games, and collaborative systems, require the network to provide a reliable multicast service. This service enables one sender to reliably transmit data to multiple receivers. Reliability is traditionally achieved by having receivers send negative acknowledgments (NACKs) to request from the sender the retransmission of lost (or missing) data packets. However, this Automatic Repeat reQuest (ARQ) approach results in the well-known NACK implosion problem at the sender. Many reliable multicast protocols have recently been proposed to reduce NACK implosion, but the message overhead due to NACK requests remains significant. Another approach, based on Forward Error Correction (FEC), requires the sender to encode additional redundant information so that a receiver can independently recover from losses. However, due to the lack of feedback from receivers, it is impossible for the sender to determine how much redundancy is needed. In this paper, we propose a new reliable multicast protocol, called ARM (for Adaptive Reliable Multicast), which integrates ARQ and FEC techniques. The objectives of ARM are to (1) reduce the message overhead due to NACK requests, (2) reduce the amount of data transmission, and (3) reduce the time it takes for all receivers to receive the data intact (without loss). During data transmission, the sender periodically informs the receivers of the number of packets that are yet to be transmitted. Based on this information, each receiver predicts whether this amount is enough to recover its losses. Only if it is not enough does the receiver request that the sender encode additional redundant packets. Using ns simulations, we show the superiority of our hybrid ARQ-FEC protocol over the well-known Scalable Reliable Multicast (SRM) protocol.
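A minimal sketch of the receiver-side prediction described above, assuming an erasure code in which any sufficient number of distinct remaining packets can substitute for the receiver's losses; names and the exact rule are illustrative, not the ARM specification.

```python
# Hypothetical receiver-side check in the spirit of ARM. The sender
# periodically advertises how many packets it has yet to send; assuming an
# erasure code in which any sufficient number of distinct packets repairs
# the receiver's losses, a NACK is sent only when the remaining
# transmissions cannot cover them.
def extra_redundancy_needed(packets_lost, packets_remaining):
    shortfall = packets_lost - packets_remaining
    return max(shortfall, 0)   # 0 means stay silent: no NACK is sent

# 3 losses with 5 packets still coming -> no NACK; 7 losses -> request 2 more.
print(extra_redundancy_needed(3, 5), extra_redundancy_needed(7, 5))
```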

Relevance:

10.00%

Abstract:

To provide real-time service or engineer constraint-based paths, networks require the underlying routing algorithm to be able to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem and present a fast algorithm to find a near-optimal solution. This algorithm, called DCCR (for Delay-Cost-Constrained Routing), is a variant of the k-shortest path algorithm. DCCR uses a new adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space. Thus, DCCR can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by using a tighter bound on path cost. This makes our algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (for Search Space Reduction + DCCR). Through extensive simulations, we confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
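The sketch below is a much-simplified stand-in for this style of algorithm: a Dijkstra search on a combined weight cost + λ·delay with a feasibility check on the result. DCCR's adaptive weight function and k-shortest-path machinery are not reproduced; the graph format, λ, and all names are assumptions.

```python
import heapq

def constrained_path(graph, src, dst, max_delay, lam=1.0):
    """Dijkstra on the combined weight cost + lam*delay, with a delay check
    on the returned path. graph: {node: [(neighbor, cost, delay), ...]}.
    A real k-shortest-path search would keep looking past the first path;
    this sketch does not."""
    pq = [(0.0, 0.0, 0.0, src, [src])]  # (combined, cost, delay, node, path)
    seen = set()
    while pq:
        comb, cost, delay, u, path = heapq.heappop(pq)
        if u == dst:
            return (path, cost, delay) if delay <= max_delay else None
        if u in seen:
            continue
        seen.add(u)
        for v, c, d in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (comb + c + lam * d, cost + c,
                                    delay + d, v, path + [v]))
    return None

g = {"a": [("b", 1, 5), ("c", 4, 1)], "b": [("d", 1, 5)], "c": [("d", 4, 1)]}
print(constrained_path(g, "a", "d", max_delay=4))  # (['a', 'c', 'd'], 8, 2)
```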

Relevance:

10.00%

Abstract:

SomeCast is a novel paradigm for the reliable multicast of real-time data to a large set of receivers over the Internet. SomeCast is receiver-initiated and thus scales with the number of receivers, the diverse characteristics of paths between senders and receivers (e.g. maximum bandwidth and round-trip time), and the dynamic conditions of such paths (e.g. congestion-induced delays and losses). SomeCast enables receivers to dynamically adjust the rate at which they receive multicast information so as to satisfy real-time QoS constraints (e.g. rate, deadlines, or jitter). This is done by enabling a receiver to join SOME number of concurrent multiCAST sessions, whereby each session delivers a portion of an encoding of the real-time data. By adjusting the number of such sessions dynamically, client-specific QoS constraints can be met independently. The SomeCast paradigm can be thought of as a generalization of the AnyCast (e.g. Dynamic Server Selection) and ManyCast (e.g. Digital Fountain) paradigms, which have been proposed in the literature to address issues of scalability in UniCast and MultiCast environments, respectively. In this paper we overview the SomeCast paradigm, describe an instance of a SomeCast protocol, and present simulation results that quantify the significant advantages gained from adopting such a protocol for the reliable multicast of data to a diverse set of receivers subject to real-time QoS constraints.
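As a sketch of the receiver-driven adjustment, assuming each joined session contributes roughly one session's worth of rate to the aggregate; the names and the control rule are illustrative, not the SomeCast protocol itself.

```python
# Illustrative control rule, not the SomeCast protocol itself: each joined
# multicast session is assumed to contribute roughly per_session_rate to the
# aggregate, so the receiver joins or leaves sessions to track target_rate.
def adjust_sessions(joined, achieved_rate, target_rate,
                    per_session_rate, max_sessions):
    if achieved_rate < target_rate and joined < max_sessions:
        return joined + 1        # falling short: join one more session
    if achieved_rate - per_session_rate >= target_rate and joined > 1:
        return joined - 1        # comfortably above target: leave one
    return joined

print(adjust_sessions(3, achieved_rate=2.1, target_rate=3.0,
                      per_session_rate=1.0, max_sessions=8))   # -> 4
```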

Relevance:

10.00%

Abstract:

Recent empirical studies have shown that Internet topologies exhibit power laws of the form y = x^α for the following relationships: (P1) outdegree of node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies: (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely diameter, average path length, and clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parameterized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model; (T3) Transit-Stub topologies generated using the GT-ITM tool; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for that representativeness. We also find that factors F1 and F2 are the key contributors in our study to making our generated topologies resemble the Internet.
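For intuition, the sketch below combines factors F1 and F2 in a minimal preferential-attachment generator (far simpler than BRITE) and extracts the rank-degree sequence that power law P1 concerns; parameters are arbitrary and multi-edges are tolerated.

```python
import random
from collections import Counter

random.seed(1)

# Combines factors F1 and F2 only: nodes arrive one at a time (incremental
# growth) and attach to m targets chosen with probability proportional to
# degree (preferential connectivity). Multi-edges are tolerated in this toy.
def grow(n_nodes, m=2):
    edges = [(0, 1)]
    endpoints = [0, 1]        # each appearance of a node = one unit of degree
    for new in range(2, n_nodes):
        for _ in range(m):
            old = random.choice(endpoints)   # degree-proportional pick (F1)
            edges.append((new, old))
            endpoints += [new, old]
    return edges

deg = Counter(v for e in grow(2000) for v in e)
ranked = sorted(deg.values(), reverse=True)
# Power law P1 predicts a roughly straight line of log(degree) vs log(rank).
print(ranked[:5], ranked[-5:])
```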

Relevance:

10.00%

Abstract:

In this position paper, we review basic control strategies that machines acting as "traffic controllers" could deploy in order to improve the management of Internet services. Such traffic controllers are likely to spur the widespread emergence of advanced applications, which have (so far) been hindered by the inability of the networking infrastructure to deliver on the promise of Quality-of-Service (QoS).

Relevance:

10.00%

Abstract:

The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both Tail-drop and Random Early Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
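A minimal sketch of the edge-marking step implied above: the access router tracks per-flow byte counts and marks packets as belonging to the short class until an assumed cutoff, so core routers keep only per-class state. The cutoff value and names are hypothetical.

```python
# Hypothetical access-router marking: track per-flow byte counts at the edge
# and mark packets "short" until an assumed cutoff, after which the flow is
# treated as "long". Core routers then keep per-class, not per-flow, state.
CUTOFF_BYTES = 30_000   # assumed short/long boundary

flow_bytes = {}

def mark(flow_id, pkt_len):
    flow_bytes[flow_id] = flow_bytes.get(flow_id, 0) + pkt_len
    return "short" if flow_bytes[flow_id] <= CUTOFF_BYTES else "long"

print(mark("10.0.0.1:80", 1500))   # early packets of a flow -> 'short'
```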

Relevance:

10.00%

Abstract:

A neuroanatomical parcellation system is described which encompasses the entire cerebral cortex and the cerebellum. The cortical system is a modified version of the scheme described by Caviness et al. (1996) and is designed particularly for studies of speech processing. The cerebellum is parcellated into 6 cortical regions of interest (ROIs) and an ROI representing the deep cerebellar nuclei in each hemisphere. The boundaries of each ROI are based on individual anatomical markers that are clearly visible in standard structural MRI acquisitions. The system permits averaging of functional imaging data sets from multiple subjects while accounting for individual anatomical variability. Used in conjunction with region-of-interest analysis techniques such as that described by Nieto-Castanon et al. (2003), the parcellation system provides a more powerful means of analyzing functional data.
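As a sketch of the kind of ROI-based analysis such a parcellation enables, the snippet below averages a functional volume within each region of an integer label volume; the label coding and random data are placeholders, not the published parcellation scheme.

```python
import numpy as np

# Given a functional volume and a subject-specific parcellation stored as an
# integer label volume, average the functional signal within each ROI. The
# label coding (0 = unlabeled background) and random data are placeholders.
def roi_means(functional, labels):
    means = {}
    for roi in np.unique(labels):
        if roi == 0:
            continue
        means[int(roi)] = float(functional[labels == roi].mean())
    return means

rng = np.random.default_rng(0)
func = rng.normal(size=(16, 16, 16))
labels = rng.integers(0, 4, size=(16, 16, 16))   # fake 3-ROI parcellation
print(roi_means(func, labels))
```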

Relevance:

10.00%

Abstract:

The Grey-White Decision Network is introduced as an application of an on-center, off-surround recurrent cooperative/competitive network to the segmentation of magnetic resonance imaging (MRI) brain images. The three-layer dynamical system relaxes into a solution where each pixel is labeled as either grey matter, white matter, or "other" matter by considering raw input intensity, edge information, and neighbor interactions. This network is presented as an example of applying a recurrent cooperative/competitive field (RCCF) to a problem with multiple conflicting constraints. Simulations of the network and its phase-plane analysis are presented.
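The toy relaxation below illustrates on-center, off-surround dynamics in a three-layer label field: same-label neighbors cooperate, competing labels at the same pixel inhibit one another, and each pixel takes the label of the winning layer. The shunting form, constants, and evidence term are illustrative assumptions, not the published network.

```python
import numpy as np

# Toy relaxation of a three-layer recurrent cooperative/competitive field:
# one layer per label (grey, white, other). Each unit receives on-center
# support from same-label neighbors (cooperation) and off-surround
# inhibition from the other labels at the same pixel (competition).
def relax(evidence, steps=50, dt=0.1, A=1.0, B=1.0):
    x = evidence.copy()                          # shape: (3, H, W)
    for _ in range(steps):
        # cooperation: mean activity of the 4-neighborhood in the same layer
        nbr = (np.roll(x, 1, 1) + np.roll(x, -1, 1)
               + np.roll(x, 1, 2) + np.roll(x, -1, 2)) / 4.0
        excite = evidence + nbr                  # bottom-up + neighbor support
        inhibit = x.sum(axis=0, keepdims=True) - x   # the other two layers
        x += dt * (-A * x + (B - x) * excite - x * inhibit)
        x = np.clip(x, 0.0, B)
    return x.argmax(axis=0)                      # winning label per pixel

rng = np.random.default_rng(0)
labels = relax(rng.random((3, 32, 32)))          # 0=grey, 1=white, 2=other
```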