603 results for Superiority


Relevance: 10.00%

Abstract:

We investigate adaptive buffer management techniques for approximate evaluation of sliding window joins over multiple data streams. In many applications, data stream processing systems have limited memory or have to deal with very high-speed data streams. In both cases, computing the exact results of joins between these streams may not be feasible, mainly because the buffers used to compute the joins hold far fewer tuples than the sliding windows themselves. A stream buffer management policy is therefore needed. We show that the buffer replacement policy is an important determinant of the quality of the produced results. To that end, we propose GreedyDual-Join (GDJ), an adaptive and locality-aware buffering technique for managing these buffers. GDJ exploits the temporal correlations (at both long and short time scales) that we found to be prevalent in many real data streams. We note that our algorithm is readily applicable to multiple data streams and multiple joins and requires almost no additional system resources. We report results of an experimental study using both synthetic and real-world data sets. Our results demonstrate the superiority and flexibility of our approach when contrasted with other recently proposed techniques.
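The abstract does not spell out GDJ's mechanics, but the GreedyDual family it builds on can be sketched compactly: each buffered item carries a priority of (global inflation value + estimated benefit), the minimum-priority item is evicted, and the inflation value is raised to the evicted priority so that idle items age. The `benefit` argument below is a hypothetical stand-in for a tuple's expected join utility; the real GDJ heuristic is adaptive and locality-aware.

```python
class GreedyDualBuffer:
    """Minimal sketch of a GreedyDual-style replacement policy, the
    family that GreedyDual-Join (GDJ) generalizes. The 'benefit' of a
    tuple (its expected join utility) is an illustrative stand-in."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.inflation = 0.0   # rises on every eviction, aging resident tuples
        self.priority = {}     # tuple id -> priority H

    def access(self, key, benefit):
        # Evict the minimum-priority tuple when a new tuple needs space.
        if key not in self.priority and len(self.priority) >= self.capacity:
            victim = min(self.priority, key=self.priority.get)
            self.inflation = self.priority.pop(victim)
        # (Re)set priority: recently useful tuples float above the inflation floor.
        self.priority[key] = self.inflation + benefit
```

Tuples that keep proving useful to the join stay above the rising inflation floor; stale ones sink to the bottom and are replaced.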

Relevance: 10.00%

Abstract:

The popularity of TCP/IP coupled with the promise of high-speed communication using Asynchronous Transfer Mode (ATM) technology have prompted the network research community to propose a number of techniques to adapt TCP/IP to ATM network environments. ATM offers Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services for best-effort traffic, such as conventional file transfer. However, recent studies have shown that TCP/IP, when implemented using ABR or UBR, leads to serious performance degradations, especially when the utilization of network resources (such as switch buffers) is high. Proposed techniques (switch-level enhancements, for example) that attempt to patch up TCP/IP over ATMs have had limited success in alleviating this problem. The major reason for TCP/IP's poor performance over ATMs has been consistently attributed to packet fragmentation, which is the result of ATM's 53-byte cell-oriented switching architecture. In this paper, we present a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. At the core of TCP Boston is the Adaptive Information Dispersal Algorithm (AIDA), an efficient encoding technique that allows for dynamic redundancy control. AIDA makes TCP/IP's performance less sensitive to cell losses, thus ensuring a graceful degradation of TCP/IP's performance when faced with congested resources. In this paper, we introduce AIDA and overview the main features of TCP Boston. We present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time).
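AIDA's core idea, dispersing a packet into n cells of which any k suffice to reconstruct it, makes the redundancy level a tunable function of the observed cell-loss rate. A minimal sketch of that control knob follows; the `safety` over-provisioning margin is an illustrative parameter, not the paper's control law.

```python
import math

def aida_redundancy(k, cell_loss_rate, safety=1.2):
    """Sketch of AIDA-style dynamic redundancy control. With an (n, k)
    information-dispersal code, any k of the n transmitted cells rebuild
    the original packet, so n is scaled up as the observed cell-loss
    rate grows. 'safety' is an illustrative over-provisioning margin."""
    n = math.ceil(k * safety / (1.0 - cell_loss_rate))
    return max(n, k)
```

When losses are rare the sender transmits close to the minimal k cells; as congestion builds, redundancy rises so the receiver can still reconstruct without retransmission.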

Relevance: 10.00%

Abstract:

While ATM bandwidth-reservation techniques are able to offer the guarantees necessary for the delivery of real-time streams in many applications (e.g. live audio and video), they suffer from many disadvantages that make them unattractive (or impractical) for many others. These limitations, coupled with the flexibility and popularity of TCP/IP as a best-effort transport protocol, have prompted the network research community to propose and implement a number of techniques that adapt TCP/IP to the Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services in ATM network environments. This allows these environments to smoothly integrate (and make use of) currently available TCP-based applications and services without much (if any) modification. However, recent studies have shown that TCP/IP, when implemented over ATM networks, is susceptible to serious performance limitations. In a recently completed study, we have unveiled a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. In this paper, we demonstrate the real-time features of TCP Boston that allow communication bandwidth to be traded off for timeliness. We start with an overview of the protocol. Next, we analytically characterize the dynamic redundancy control features of TCP Boston. We then present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput and percent of missed deadlines) and real-time application-centric metrics (e.g., response time and jitter).

Relevance: 10.00%

Abstract:

The increased diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. Contrary to previous memory-less controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) they enable a wider region of TCP-friendliness, and thus more flexibility in trading off among smoothness, aggressiveness, and responsiveness; and (2) they ensure a faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with time since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
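SIMD's defining rule can be sketched directly: the window grows with the square of the time elapsed since the last loss (hence "super-linearly"), and backs off multiplicatively on loss. The alpha/beta constants below are illustrative, not the paper's TCP-friendly parameterization.

```python
class SIMD:
    """Sketch of Square-Increase/Multiplicative-Decrease: the congestion
    window grows quadratically in the number of RTTs since the last
    loss, and is cut multiplicatively when a loss is detected."""

    def __init__(self, w=10.0, alpha=1.0, beta=0.5):
        self.w0 = w              # window right after the last loss
        self.t = 0               # RTTs elapsed since the last loss
        self.alpha, self.beta = alpha, beta
        self.w = w

    def on_rtt_no_loss(self):
        self.t += 1
        self.w = self.w0 + self.alpha * self.t ** 2   # square increase

    def on_loss(self):
        self.w *= (1 - self.beta)                      # multiplicative decrease
        self.w0, self.t = self.w, 0
```

Because the increment accelerates with time since the last loss, SIMD ramps up quickly when bandwidth suddenly becomes available, which is exactly the regime where the abstract claims its advantage over AIMD and binomial controls.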

Relevance: 10.00%

Abstract:

End-to-end differentiation between wireless and congestion loss can equip TCP control so that it operates effectively in a hybrid wired/wireless environment. Our approach integrates two techniques: packet loss pairs (PLP) and Hidden Markov Modeling (HMM). A packet loss pair is formed by two back-to-back packets, where one packet is lost while the second is successfully received. The purpose is for the second packet to carry the state of the network path, namely the round-trip time (RTT), at the time the other packet is lost. Under realistic conditions, PLP provides strong differentiation between congestion and wireless losses based on distinguishable RTT distributions. An HMM is then trained so that observed RTTs can be mapped to model states that represent either congestion loss or wireless loss. Extensive simulations confirm the accuracy of our HMM-based technique in classifying the cause of a packet loss. We also show the superiority of our technique over the Vegas predictor, which was recently found to perform best and which exemplifies other existing loss labeling techniques.
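The classification step can be sketched as follows. The real technique trains a multi-state HMM over RTT observations; the sketch below collapses that to a per-observation emission-likelihood test with two Gaussian RTT classes, whose (mean, stddev) parameters are illustrative assumptions rather than trained values.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify_loss(rtt, cong=(0.30, 0.05), wless=(0.10, 0.02)):
    """Label a loss 'congestion' or 'wireless' from the RTT carried by the
    surviving packet of a loss pair. The idea: congestion losses coincide
    with inflated RTTs (full queues), wireless losses with near-baseline
    RTTs. (mu, sigma) per class stand in for HMM emission parameters."""
    p_c = gaussian_pdf(rtt, *cong)
    p_w = gaussian_pdf(rtt, *wless)
    return 'congestion' if p_c > p_w else 'wireless'
```

A loss observed alongside a 280 ms RTT is labeled congestion; one alongside a 110 ms RTT is labeled wireless. The full HMM additionally exploits transition structure between states, which this per-sample test ignores.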

Relevance: 10.00%

Abstract:

Web caching aims to reduce network traffic, server load, and user-perceived retrieval delays by replicating "popular" content on proxy caches that are strategically placed within the network. While key to effective cache utilization, popularity information (e.g. relative access frequencies of objects requested through a proxy) is seldom incorporated directly in cache replacement algorithms. Rather, other properties of the request stream (e.g. temporal locality and content size), which are easier to capture in an on-line fashion, are used to indirectly infer popularity information, and hence drive cache replacement policies. Recent studies suggest that the correlation between these secondary properties and popularity is weakening due in part to the prevalence of efficient client and proxy caches (which tend to mask these correlations). This trend points to the need for proxy cache replacement algorithms that directly capture and use popularity information. In this paper, we (1) present an on-line algorithm that effectively captures and maintains an accurate popularity profile of Web objects requested through a caching proxy, (2) propose a novel cache replacement policy that uses such information to generalize the well-known GreedyDual-Size algorithm, and (3) show the superiority of our proposed algorithm by comparing it to a host of recently-proposed and widely-used algorithms using extensive trace-driven simulations and a variety of performance metrics.
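The proposed policy generalizes GreedyDual-Size by folding measured popularity into the priority term. A minimal sketch follows; the exact priority formula (inflation + hits/size) is an assumption for illustration, and the paper's on-line popularity-profile maintenance is more elaborate than the per-entry hit counter used here.

```python
class GDSPopularity:
    """Sketch of a GreedyDual-Size variant whose eviction priority uses a
    directly measured popularity (access count) rather than inferring it
    from recency or size alone. Evicts the minimum-priority object."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.inflation = 0.0
        self.meta = {}   # url -> (priority, size, hits)

    def request(self, url, size):
        _, _, hits = self.meta.get(url, (0.0, size, 0))
        hits += 1
        # Evict minimum-priority objects until the new object fits.
        while url not in self.meta and self.used + size > self.capacity and self.meta:
            victim = min(self.meta, key=lambda u: self.meta[u][0])
            vp, vs, _ = self.meta.pop(victim)
            self.inflation = vp          # aging: raise the priority floor
            self.used -= vs
        if url not in self.meta:
            self.used += size
        self.meta[url] = (self.inflation + hits / size, size, hits)
```

Popular objects accumulate hits and therefore priority, so they survive eviction rounds that would remove an equally recent but rarely requested object of the same size.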

Relevance: 10.00%

Abstract:

In [previous papers] we presented the design, specification, and proof of correctness of a fully distributed location management scheme for PCS networks, and argued that fully replicating location information is both appropriate and efficient for small PCS networks. In this paper, we analyze the performance of this scheme. We then extend the scheme to a hierarchical environment so that it scales to large PCS networks. Through extensive numerical results, we show the superiority of our scheme compared to the current IS-41 standard.

Relevance: 10.00%

Abstract:

An increasing number of applications, such as distributed interactive simulation, live auctions, distributed games, and collaborative systems, require the network to provide a reliable multicast service. This service enables one sender to reliably transmit data to multiple receivers. Reliability is traditionally achieved by having receivers send negative acknowledgments (NACKs) to request from the sender the retransmission of lost (or missing) data packets. However, this Automatic Repeat reQuest (ARQ) approach results in the well-known NACK implosion problem at the sender. Many reliable multicast protocols have recently been proposed to reduce NACK implosion, but the message overhead due to NACK requests remains significant. Another approach, based on Forward Error Correction (FEC), requires the sender to encode additional redundant information so that a receiver can independently recover from losses. However, due to the lack of feedback from receivers, it is impossible for the sender to determine how much redundancy is needed. In this paper, we propose a new reliable multicast protocol, called ARM (Adaptive Reliable Multicast), that integrates ARQ and FEC techniques. The objectives of ARM are to (1) reduce the message overhead due to NACK requests, (2) reduce the amount of data transmission, and (3) reduce the time it takes for all receivers to receive the data intact (without loss). During data transmission, the sender periodically informs the receivers of the number of packets that are yet to be transmitted. Based on this information, each receiver predicts whether this amount is enough to recover its losses; only if it is not does the receiver request the sender to encode additional redundant packets. Using ns simulations, we show the superiority of our hybrid ARQ-FEC protocol over the well-known Scalable Reliable Multicast (SRM) protocol.
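Under an (n, k) erasure code, the receiver-side prediction reduces to a simple count check. This sketch assumes such a code, in which any k distinct packets reconstruct the original data; that assumption matches the hybrid ARQ-FEC setting described above, though the paper's predictor may track additional state.

```python
def needs_nack(k, received, remaining):
    """ARM-style receiver check (sketch). The sender periodically announces
    how many packets remain to be transmitted; with an (n, k) erasure code,
    the receiver NACKs (requesting extra redundant packets) only if even
    receiving ALL remaining packets would leave it short of the k packets
    needed to reconstruct the data."""
    return received + remaining < k
```

A receiver that has 6 of the k = 8 packets it needs, with 2 still announced as pending, stays silent; one with only 5 sends a NACK. This is how ARM suppresses most NACK traffic without losing reliability.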

Relevance: 10.00%

Abstract:

For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. With the assumption of a known covariance matrix, its distribution is derived and the expectation of its actual and apparent error rates evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed, and the evaluation of its probabilities of misclassification discussed. For known covariance matrices, the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectation of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive, and kernel methods are compared by evaluating their biases and mean square errors. Some algebraic expressions for these quantities are derived. With equal covariance matrices, the predictive method is preferable. The source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices, the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of true log-odds. The effect of correlation in the variables with product kernels is considered. With equal covariance matrices, the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions, the product kernel method is a good estimator of true log-odds.
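For reference, the two discriminant functions discussed above have the following standard forms (notation ours, not the thesis's): S denotes the pooled or per-population sample covariance matrix and x̄_i the sample mean of population i.

```latex
% Sample linear discriminant: allocate x to population 1 when W(x) > 0.
W(\mathbf{x}) = (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)^{\top} S^{-1}
\left(\mathbf{x} - \tfrac{1}{2}(\bar{\mathbf{x}}_1 + \bar{\mathbf{x}}_2)\right)

% Quadratic discriminant for unequal covariance matrices S_1 \neq S_2:
Q(\mathbf{x}) = \tfrac{1}{2}\ln\frac{|S_2|}{|S_1|}
 - \tfrac{1}{2}(\mathbf{x}-\bar{\mathbf{x}}_1)^{\top} S_1^{-1}(\mathbf{x}-\bar{\mathbf{x}}_1)
 + \tfrac{1}{2}(\mathbf{x}-\bar{\mathbf{x}}_2)^{\top} S_2^{-1}(\mathbf{x}-\bar{\mathbf{x}}_2)
```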

Relevance: 10.00%

Abstract:

BACKGROUND: Serologic methods have been used widely to test for celiac disease and have gained importance in diagnostic definition and in new epidemiologic findings. However, there is no standardization, and there are no reference protocols and materials. METHODS: The European working group on Serological Screening for Celiac Disease has defined robust noncommercial test protocols for immunoglobulin (Ig)G and IgA gliadin antibodies and for IgA autoantibodies against endomysium and tissue transglutaminase. Standard curves were linear in the decisive range, and intra-assay variation coefficients were less than 5% to 10%. Calibration was performed with a group reference serum. Joint cutoff limits were used. Seven laboratories took part in the final collaborative study on 252 randomized sera classified by histology (103 pediatric and adult patients with active celiac disease, 89 disease control subjects, and 60 blood donors). RESULTS: IgA autoantibodies against endomysium and tissue transglutaminase yielded superior sensitivity (90% and 93%, respectively) and specificity (99% and 95%, respectively) over IgA and IgG gliadin antibodies. Tissue transglutaminase antibody testing showed superior receiver operating characteristic performance compared with gliadin antibodies. The kappa values for interlaboratory reproducibility showed superiority for IgA endomysium (0.93) in comparison with tissue transglutaminase antibodies (0.83) and gliadin antibodies (0.82 for IgG, 0.62 for IgA). CONCLUSIONS: Basic criteria of standardization and quality assessment must be fulfilled by any given test protocol proposed for serologic investigation of celiac disease. The working group has produced robust test protocols and reference materials available for standardization to further improve the reliability of serologic testing for celiac disease.

Relevance: 10.00%

Abstract:

In preventing invasive fungal disease (IFD) in patients with acute myelogenous leukemia (AML) or myelodysplastic syndrome (MDS), clinical trials demonstrated efficacy of posaconazole over fluconazole and itraconazole. However, the effectiveness of posaconazole has not been investigated in the United States in a real-world setting outside the environment of a controlled clinical trial. We performed a single-center, retrospective cohort study of 130 evaluable patients ≥18 years of age admitted to Duke University Hospital between 2004 and 2010 who received either posaconazole or fluconazole as prophylaxis during first induction or first reinduction chemotherapy for AML or MDS. The primary endpoint was possible, probable, or definite breakthrough IFD. Baseline characteristics were well balanced between groups, except that posaconazole recipients received reinduction chemotherapy and cytarabine more frequently. IFD occurred in 17/65 (27.0%) in the fluconazole group and in 6/65 (9.2%) in the posaconazole group (P = 0.012). Definite/probable IFDs occurred in 7 (10.8%) and 0 patients (0%), respectively (P = 0.0013). In multivariate analysis, fluconazole prophylaxis and duration of neutropenia were predictors of IFD. Mortality was similar between groups. This study demonstrates superior effectiveness of posaconazole over fluconazole as prophylaxis of IFD in AML and MDS patients. Such superiority did not translate to reductions in 100-day all-cause mortality.

Relevance: 10.00%

Abstract:

Heterosis, the phenotypic superiority of a hybrid over its parents, has been demonstrated for many traits in Arabidopsis thaliana, but its effect on defence remains largely unexplored. Here, we show that hybrids between some A. thaliana accessions show increased resistance to the biotrophic bacterial pathogen Pseudomonas syringae pv. tomato (Pst) DC3000. Comparisons of transcriptomes between these hybrids and their parents after inoculation reveal that several key salicylic acid (SA) biosynthesis genes are significantly upregulated in hybrids. Moreover, SA levels are higher in hybrids than in either parent. Increased resistance to Pst DC3000 is significantly compromised in hybrids of pad4 mutants in which the SA biosynthesis pathway is blocked. Finally, increased histone H3 acetylation of key SA biosynthesis genes correlates with their upregulation in infected hybrids. Our data demonstrate that enhanced activation of SA biosynthesis in A. thaliana hybrids may contribute to their increased resistance to a biotrophic bacterial pathogen.

Relevance: 10.00%

Abstract:

BACKGROUND: Dolutegravir (S/GSK1349572), a once-daily, unboosted integrase inhibitor, was recently approved in the United States for the treatment of human immunodeficiency virus type 1 (HIV-1) infection in combination with other antiretroviral agents. Dolutegravir, in combination with abacavir-lamivudine, may provide a simplified regimen. METHODS: We conducted a randomized, double-blind, phase 3 study involving adult participants who had not received previous therapy for HIV-1 infection and who had an HIV-1 RNA level of 1000 copies per milliliter or more. Participants were randomly assigned to dolutegravir at a dose of 50 mg plus abacavir-lamivudine once daily (DTG-ABC-3TC group) or combination therapy with efavirenz-tenofovir disoproxil fumarate (DF)-emtricitabine once daily (EFV-TDF-FTC group). The primary end point was the proportion of participants with an HIV-1 RNA level of less than 50 copies per milliliter at week 48. Secondary end points included the time to viral suppression, the change from baseline in CD4+ T-cell count, safety, and viral resistance. RESULTS: A total of 833 participants received at least one dose of study drug. At week 48, the proportion of participants with an HIV-1 RNA level of less than 50 copies per milliliter was significantly higher in the DTG-ABC-3TC group than in the EFV-TDF-FTC group (88% vs. 81%, P = 0.003), thus meeting the criterion for superiority. The DTG-ABC-3TC group had a shorter median time to viral suppression than did the EFV-TDF-FTC group (28 vs. 84 days, P<0.001), as well as greater increases in CD4+ T-cell count (267 vs. 208 per cubic millimeter, P<0.001). The proportion of participants who discontinued therapy owing to adverse events was lower in the DTG-ABC-3TC group than in the EFV-TDF-FTC group (2% vs. 10%); rash and neuropsychiatric events (including abnormal dreams, anxiety, dizziness, and somnolence) were significantly more common in the EFV-TDF-FTC group, whereas insomnia was reported more frequently in the DTG-ABC-3TC group. No participants in the DTG-ABC-3TC group had detectable antiviral resistance; one tenofovir DF-associated mutation and four efavirenz-associated mutations were detected in participants with virologic failure in the EFV-TDF-FTC group. CONCLUSIONS: Dolutegravir plus abacavir-lamivudine had a better safety profile and was more effective through 48 weeks than the regimen with efavirenz-tenofovir DF-emtricitabine. Copyright © 2013 Massachusetts Medical Society.

Relevance: 10.00%

Abstract:

A spectrally efficient strategy is proposed for cooperative multiple access (CMA) channels in a centralized communication environment with $N$ users. By applying superposition coding, each user transmits a mixture containing its own information as well as the other users', which means that each user shares part of its power with the others. The use of superposition coding in cooperative networks was first proposed in earlier work and is generalized to a multiple-user scenario in this paper. Since the proposed CMA system can be seen as a precoded point-to-point multiple-antenna system, its performance can be best evaluated using the diversity-multiplexing tradeoff. By carefully categorizing the outage events, the diversity-multiplexing tradeoff can be obtained, which shows that the proposed cooperative strategy achieves a larger diversity gain than the compared transmission schemes at any multiplexing gain. Furthermore, it is demonstrated that the proposed strategy achieves the optimal tradeoff for multiplexing gains $0 \leq r \leq 1$, whereas the compared cooperative scheme is only optimal for $0 \leq r \leq 1/N$. As discussed in the paper, this superiority of the proposed CMA system is due to the fact that the relaying transmission does not consume extra channel use; hence, the deteriorating effect of cooperative communication on the data rate is effectively limited.
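The diversity-multiplexing tradeoff used as the yardstick here is the standard asymptotic one: a scheme operating at rate $r \log \rho$ (multiplexing gain $r$) achieves diversity gain $d(r)$ when its outage/error probability decays as $\rho^{-d(r)}$ at high SNR $\rho$. In our notation:

```latex
d(r) \;=\; -\lim_{\rho \to \infty} \frac{\log P_{\mathrm{out}}(r \log \rho)}{\log \rho}
```

The claim in the abstract is then that the proposed CMA strategy's $d(r)$ curve dominates the compared schemes' curves, and coincides with the optimal tradeoff over the whole range $0 \leq r \leq 1$ rather than only $0 \leq r \leq 1/N$.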

Relevance: 10.00%

Abstract:

Key pre-distribution schemes have been proposed as a means to overcome Wireless Sensor Network constraints such as limited communication and processing power. Two sensor nodes can establish a secure link, with some probability, based on the information stored in their memories; however, it is not always possible for two sensor nodes to set up a secure link. In this paper, we propose a new approach that elects trusted common nodes, called "proxies", which reside on an existing secure path linking two sensor nodes. These proxies are used to send the generated key, which is divided into parts ("nuggets") according to the number of elected proxies. Our approach has been assessed against previously developed algorithms, and the results show that our algorithm discovers proxies more quickly, and discovers proxies closer to both end nodes, thus producing shorter path lengths. We have also assessed the impact of our algorithm on the average time to establish a secure link when the transmitters and receivers of the sensor nodes are "ON". The results show the superiority of our algorithm in this regard. Overall, the proposed algorithm is well suited for Wireless Sensor Networks.
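The division of the key into per-proxy nuggets can be sketched with XOR-based n-of-n secret sharing, so that no single proxy learns the whole key and the destination recovers it by combining all nuggets. XOR sharing is an illustrative choice: the abstract specifies only that the key is divided according to the number of elected proxies, not the splitting scheme.

```python
import os

def split_key(key: bytes, n_proxies: int) -> list:
    """Sketch: divide a pairwise key into 'nuggets', one per elected proxy.
    The first n-1 nuggets are random; the last is the key XORed with all
    of them, so any n-1 nuggets reveal nothing about the key."""
    shares = [os.urandom(len(key)) for _ in range(n_proxies - 1)]
    last = key
    for s in shares:                 # last = key XOR s1 XOR ... XOR s_{n-1}
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def join_key(shares: list) -> bytes:
    """Recover the key at the destination by XORing all nuggets together."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Each nugget travels over a different proxy's secure path; a compromised proxy holding one nugget learns nothing, which is the security rationale for splitting the key in the first place.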