749 results for 2004-07-BS
Abstract:
Kargl, F.; Meyer, A.; Horbach, J.; Kob, W. (2004) 'Channel formation and intermediate range order in sodium silicate melts and glasses', Physical Review Letters 93(2), 027801. RAE2008
Abstract:
M. T. Rose, T. E. C. Weekes and P. Rowlinson (2004). Individual variation in the milk yield response to bovine somatotropin in dairy cows. Journal of Dairy Science, 87(7), 2024-2031. Sponsorship: industry. RAE2008
Abstract:
One of TCP's critical tasks is to determine which packets are lost in the network, as a basis for control actions (flow control and packet retransmission). Modern TCP implementations use two mechanisms: timeout and fast retransmit. Detection via timeout is necessarily a time-consuming operation; fast retransmit, while much quicker, is only effective for a small fraction of packet losses. In this paper we consider the problem of packet loss detection in TCP more generally. We concentrate on the fact that TCP's control actions are necessarily triggered by inference of packet loss, rather than conclusive knowledge. This suggests that one might analyze TCP's packet loss detection in a standard inference framework based on probability of detection and probability of false alarm. This paper makes two contributions to that end. First, we study an example of more general packet loss inference, namely optimal Bayesian packet loss detection based on round trip time. We show that for long-lived flows, it is frequently possible to achieve high detection probability and low false alarm probability based on measured round trip time. Second, we construct an analytic performance model that incorporates general packet loss inference into TCP. We show that for realistic detection and false alarm probabilities (as are achievable via our Bayesian detector) and for moderate packet loss rates, the use of more general packet loss inference in TCP can improve throughput by as much as 25%.
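To make the inference framing concrete, here is a minimal Python sketch of a Bayesian loss decision driven by round trip time. It illustrates the general idea only, not the paper's detector; the prior loss rate, the RTT survival function, and the decision threshold are assumed placeholders.

```python
# Hedged sketch (not the paper's detector): decide whether an
# unacknowledged packet is lost, given how long we have waited and an
# empirical model of the flow's round trip times. If the packet was
# delivered, the ACK delay follows the RTT distribution; if it was lost,
# no ACK will ever arrive.

def prob_lost(wait_time, rtt_survival, prior_loss=0.02):
    """Posterior probability that the packet was lost.

    rtt_survival(t) approximates P(RTT > t), estimated from the flow's
    RTT samples (an assumed helper); prior_loss is the assumed loss rate.
    """
    p_no_ack_if_delivered = rtt_survival(wait_time)  # ACK still in flight
    p_no_ack_if_lost = 1.0                           # ACK never arrives
    numerator = prior_loss * p_no_ack_if_lost
    denominator = numerator + (1.0 - prior_loss) * p_no_ack_if_delivered
    return numerator / denominator

def should_retransmit(wait_time, rtt_survival, threshold=0.95):
    # A higher threshold lowers the false alarm probability at the cost
    # of detection probability, mirroring the trade-off in the abstract.
    return prob_lost(wait_time, rtt_survival) >= threshold
```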
Abstract:
We leverage the buffering capabilities of end-systems to achieve scalable, asynchronous delivery of streams in a peer-to-peer environment. Unlike existing cache-and-relay schemes, we propose a distributed prefetching protocol where peers prefetch and store portions of the streaming media ahead of their playout time, not only turning themselves into possible sources for other peers but also allowing them to use their prefetched data to overcome the departure of their source-peer. This stands in sharp contrast to existing cache-and-relay schemes, where the departure of the source-peer forces its peer children to go to the original server, thus disrupting their service and increasing server and network load. Through mathematical analysis and simulations, we show the effectiveness of maintaining such asynchronous multicasts from several source-peers to other child peers, and the efficacy of prefetching in the face of peer departures. We confirm the scalability of our dPAM protocol, which is shown to significantly reduce server load.
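As a rough illustration of the prefetching argument (hypothetical names and rates, not the dPAM protocol itself), the following sketch tracks a peer's prefetched reserve and asks whether it can mask the loss of its source-peer.

```python
# Rough illustration (not dPAM): a peer that prefetches faster than it
# plays out builds a reserve beyond its playout point, and that reserve
# can cover the time needed to attach to a new source when the current
# source-peer departs.

class PrefetchingPeer:
    def __init__(self, playout_rate, download_rate):
        self.playout_rate = playout_rate    # media units consumed per second
        self.download_rate = download_rate  # media units fetched per second
        self.reserve = 0.0                  # media buffered ahead of playout

    def stream(self, seconds, source_alive=True):
        """Advance time; return False if playback would be disrupted."""
        if source_alive:
            self.reserve += self.download_rate * seconds
        self.reserve -= self.playout_rate * seconds
        return self.reserve >= 0.0

    def survives_departure(self, reattach_delay):
        # The prefetched reserve must cover playout until a new source
        # (another peer or the server) is found.
        return self.reserve >= self.playout_rate * reattach_delay
```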
Abstract:
In this paper we discuss a new type of query in spatial databases, called the Trip Planning Query (TPQ). Given a set of points P in space, where each point belongs to a category, and given two points s and e, TPQ asks for the best trip that starts at s, passes through exactly one point from each category, and ends at e. An example of a TPQ is when a user wants to visit a set of different places while minimizing the total travelling cost, e.g. what is the shortest travelling plan for me to visit an automobile shop, a CVS pharmacy outlet, and a Best Buy shop along my trip from A to B? The trip planning query is an extension of the well-known travelling salesman problem (TSP) and is therefore NP-hard. The difficulty of this query lies in the existence of multiple choices for each category. In this paper, we first study fast approximation algorithms for the trip planning query in a metric space, assuming that the data set fits in main memory, and give a theoretical analysis of their approximation bounds. Then, the trip planning query is examined for data sets that do not fit in main memory and must be stored on disk. For the disk-resident data, we consider two cases. In one case, we assume that the points are located in Euclidean space and indexed with an R-tree. In the other case, we consider points that lie on the edges of a spatial network (e.g. a road network), where the distance between two points is defined as the shortest distance over the network. Finally, we give an experimental evaluation of the proposed algorithms using synthetic data sets generated on real road networks.
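For illustration only, the sketch below is a simple greedy heuristic for TPQ with Euclidean distances and in-memory data: repeatedly hop to the closest point from a not-yet-covered category. It shows one obvious approximation idea, not one of the algorithms analyzed in the paper.

```python
# A minimal greedy sketch under stated assumptions (Euclidean distance,
# in-memory points); not the paper's algorithms.
from math import dist

def greedy_trip(start, end, points_by_category):
    """points_by_category: dict mapping category -> list of (x, y) points.
    Returns a trip [start, p1, ..., pk, end] visiting one point per category.
    """
    trip = [start]
    current = start
    remaining = dict(points_by_category)
    while remaining:
        # Pick the (category, point) pair closest to the current position.
        cat, point = min(
            ((c, p) for c, pts in remaining.items() for p in pts),
            key=lambda cp: dist(current, cp[1]),
        )
        trip.append(point)
        current = point
        del remaining[cat]
    trip.append(end)
    return trip
```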
Abstract:
We investigate adaptive buffer management techniques for approximate evaluation of sliding window joins over multiple data streams. In many applications, data stream processing systems have limited memory or have to deal with very high-speed data streams. In both cases, computing the exact results of joins between these streams may not be feasible, mainly because the buffers used to compute the joins hold a much smaller number of tuples than the sliding windows themselves. A stream buffer management policy is therefore needed. We show that the buffer replacement policy is an important determinant of the quality of the produced results. To that end, we propose GreedyDual-Join (GDJ), an adaptive and locality-aware buffering technique for managing these buffers. GDJ exploits the temporal correlations (at both long and short time scales) that we found to be prevalent in many real data streams. We note that our algorithm is readily applicable to multiple data streams and multiple joins and requires almost no additional system resources. We report results of an experimental study using both synthetic and real-world data sets. Our results demonstrate the superiority and flexibility of our approach when contrasted with other recently proposed techniques.
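The following sketch shows a GreedyDual-style join buffer in the spirit of such a policy. It is a hedged illustration of the generic GreedyDual aging mechanism, not the paper's GDJ algorithm, and the utility score assigned to each tuple (for instance, a proxy for recent match rate) is left abstract.

```python
# Hedged sketch of a GreedyDual-flavoured replacement policy for a join
# buffer: each tuple carries a utility score, evictions take the lowest
# score, and an inflation value ages out tuples whose locality has faded.
import heapq

class GreedyDualBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.inflation = 0.0
        self.heap = []            # entries: (priority, seq, tuple_)
        self.seq = 0              # tie-breaker for the heap

    def insert(self, tuple_, utility):
        if len(self.heap) >= self.capacity:
            # Evict the lowest-priority tuple and raise the inflation to
            # its priority (classic GreedyDual aging step).
            evicted_priority, _, _ = heapq.heappop(self.heap)
            self.inflation = evicted_priority
        heapq.heappush(self.heap, (self.inflation + utility, self.seq, tuple_))
        self.seq += 1
        # A fuller version would also refresh a tuple's priority whenever
        # it produces a join match; omitted here for brevity.

    def tuples(self):
        return [t for _, _, t in self.heap]
```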
Abstract:
Routing protocols in wireless sensor networks (WSNs) face two main challenges. First, the challenging environments in which WSNs are deployed negatively affect the quality of the routing process; routing protocols for WSNs should therefore recognize and react to node failures and packet losses. Second, sensor nodes are battery-powered, which makes power a scarce resource, so routing protocols should optimize power consumption to prolong the lifetime of the WSN. In this paper, we present a new adaptive routing protocol for WSNs, which we call M^2RC. M^2RC has two phases: a mesh establishment phase and a data forwarding phase. In the first phase, M^2RC establishes the routing state needed to enable multipath data forwarding. In the second phase, M^2RC forwards data packets from the source to the sink. Targeting hop-by-hop reliability, an M^2RC forwarding node waits for an acknowledgement (ACK) that its packets were correctly received at the next-hop neighbor. Based on this feedback, an M^2RC node applies multiplicative-increase/additive-decrease (MIAD) control to the number of neighbors targeted by its packet broadcast. We simulated M^2RC in the ns-2 simulator and compared it to the GRAB, Max-power, and Min-power routing schemes. Our simulations show that M^2RC achieves the highest throughput while consuming at least 10-30% less power per delivered report in scenarios where a certain number of nodes unexpectedly fail.
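A minimal sketch of the MIAD rule described above follows; the concrete increase factor, decrease step, and degree bounds are assumptions for illustration, not values from the paper.

```python
# Illustrative MIAD sketch (hypothetical parameters, not the M^2RC code):
# on a missing ACK the node grows its forwarding set multiplicatively,
# and on a successful ACK it shrinks the set additively, trading energy
# for hop-by-hop reliability.

def update_forwarding_degree(current, ack_received,
                             increase_factor=2.0, decrease_step=1,
                             min_degree=1, max_degree=8):
    if ack_received:
        new_degree = current - decrease_step                 # additive decrease
    else:
        new_degree = int(round(current * increase_factor))   # multiplicative increase
    return max(min_degree, min(max_degree, new_degree))
```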
Abstract:
With the increased use of "Virtual Machines" (VMs) as vehicles that isolate applications running on the same host, it is necessary to devise techniques that enable multiple VMs to share underlying resources both fairly and efficiently. To that end, one common approach is to deploy complex resource management techniques in the hosting infrastructure. Alternatively, in this paper, we advocate the use of self-adaptation in the VMs themselves based on feedback about resource usage and availability. Consequently, we define a "Friendly" VM (FVM) to be a virtual machine that adjusts its demand for system resources so that they are both efficiently and fairly allocated to competing FVMs. Such properties are ensured using one of many provably convergent control rules, such as AIMD. By adopting this distributed, application-based approach to resource management, it is not necessary to make assumptions about the underlying resources or about the requirements of the FVMs competing for these resources. To demonstrate the elegance and simplicity of our approach, we present a prototype implementation of our FVM framework in User-Mode Linux (UML), an implementation that consists of fewer than 500 lines of code changes to UML. We present an analytic, control-theoretic model of FVM adaptation, which establishes convergence and fairness properties. These properties are also backed up with experimental results using our prototype FVM implementation.
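The AIMD-style adaptation can be sketched in a few lines. The step sizes and the congestion signal below are hypothetical; the code illustrates only the control rule, not the UML-based FVM implementation.

```python
# A minimal AIMD sketch under stated assumptions (not the FVM/UML patch):
# the VM probes for more resource while it sees no contention and backs
# off multiplicatively when feedback signals contention, the kind of
# provably convergent-to-fair rule the abstract refers to.

def adjust_demand(current_demand, congestion_signal,
                  additive_step=1.0, backoff_factor=0.5, floor=1.0):
    if congestion_signal:
        return max(floor, current_demand * backoff_factor)  # multiplicative decrease
    return current_demand + additive_step                   # additive increase
```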
Abstract:
Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and only relies on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
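To illustrate the rejection step, here is a sketch of how the bias toward sensors with large Voronoi cells could be corrected. The routing and cell-area helpers are assumed primitives for illustration (standing in for the geographic routing and distributed Voronoi computation the abstract relies on); this is not the paper's protocol.

```python
# Hedged sketch of the rejection idea: routing a query to a uniformly
# random location selects the owning sensor with probability proportional
# to its Voronoi cell area, so the sensor accepts with probability
# inversely proportional to that area, pushing the selection back toward
# uniform over sensors.
import random

def sample_sensor(field_width, field_height, owner_of, cell_area_of, ref_area):
    """owner_of(x, y) -> sensor whose Voronoi cell contains (x, y);
    cell_area_of(sensor) -> that sensor's (locally known) cell area;
    ref_area is a lower bound on cell areas used to normalise acceptance.
    All three are assumed helpers for this illustration."""
    while True:
        x = random.uniform(0, field_width)
        y = random.uniform(0, field_height)
        sensor = owner_of(x, y)                          # geographic routing step
        accept_prob = min(1.0, ref_area / cell_area_of(sensor))
        if random.random() < accept_prob:                # von Neumann rejection
            return sensor                                # approximately uniform pick
```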
Abstract:
PURPOSE: To compare health-related quality of life (HRQOL) in patients with metastatic breast cancer receiving the combination of doxorubicin and paclitaxel (AT) or doxorubicin and cyclophosphamide (AC) as first-line chemotherapy. PATIENTS AND METHODS: Eligible patients (n = 275) with anthracycline-naive measurable metastatic breast cancer were randomly assigned to AT (doxorubicin 60 mg/m(2) as an intravenous bolus plus paclitaxel 175 mg/m(2) as a 3-hour infusion) or AC (doxorubicin 60 mg/m(2) plus cyclophosphamide 600 mg/m(2)) every 3 weeks for a maximum of six cycles. Dose escalation of paclitaxel (200 mg/m(2)) and cyclophosphamide (750 mg/m(2)) was planned at cycle 2 to reach equivalent myelosuppression in the two groups. HRQOL was assessed with the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire C30 and the EORTC Breast Module at baseline, at the start of cycles 2, 4, and 6, and 3 months after the last cycle. RESULTS: Seventy-nine percent of the patients (n = 219) completed a baseline measure. There were no statistically significant differences in HRQOL between the two treatment groups. In both groups, selected aspects of HRQOL were impaired over time, with increased fatigue, although some clinically significant improvements in emotional functioning and a reduction in pain were also seen. Overall, global quality of life was maintained in both treatment groups. CONCLUSION: This information is important when advising women of the expected HRQOL consequences of treatment regimens and should help clinicians and their patients make informed treatment decisions.
Abstract:
This study explored the factors associated with state-level allocations to tobacco-control programs. The primary research question was whether public sentiment regarding tobacco control was a significant factor in the states' 2001 budget decisions. In addition to public opinion, several other political and economic measures were considered. Significant associations were found between our outcome, state-level tobacco-control funding per capita, and key variables of interest including public opinion, the amount of tobacco settlement funds received, the party affiliation of the governor, the state's smoking rate, excise tax revenue received, and whether the state was a major producer of tobacco. The findings from this study supported our hypothesis that states with citizens who favor more restrictive indoor air policies allocate more to tobacco control. Effective public education to change public opinion and the cultural norms surrounding smoking may affect political decisions and, in turn, increase funding for crucial public health programs.
Abstract:
Population introduction is an important tool for ecosystem restoration. However, before introductions are conducted, it is important to evaluate the genetic, phenotypic and ecological suitability of possible replacement populations. Careful genetic analysis is particularly important if it is suspected that the extirpated population was unique or genetically divergent. On the island of Martha's Vineyard, Massachusetts, the introduction of greater prairie chickens (Tympanuchus cupido pinnatus) to replace the extinct heath hen (T. cupido cupido) is being considered as part of an ecosystem restoration project. Martha's Vineyard was home to the last remaining heath hen population until its extinction in 1932. We conducted this study to aid in determining the suitability of greater prairie chickens as a possible replacement for the heath hen. We examined mitochondrial control region sequences from extant populations of all prairie grouse species (Tympanuchus) and from museum skin heath hen specimens. Our data suggest that the Martha's Vineyard heath hen population represents a divergent mitochondrial lineage. This result is attributable either to a long period of geographical isolation from other prairie grouse populations or to a population bottleneck resulting from human disturbance. The mtDNA diagnosability of the heath hen contrasts with the network of mtDNA haplotypes of other prairie grouse (T. cupido attwateri, T. pallidicinctus and T. phasianellus), which do not form distinguishable mtDNA groupings. Our findings suggest that the Martha's Vineyard heath hen was more genetically isolated than are current populations of prairie grouse and place the emphasis for future research on examining prairie grouse adaptations to different habitat types to assess ecological exchangeability between heath hens and greater prairie chickens.
Abstract:
Ground-based remote sensing techniques are used to measure volcanic SO2 fluxes in efforts to characterise volcanic activity. As these measurements are made several km from the source, there is the potential for in-plume chemical transformation of SO2 to sulphate aerosol (conversion rates are dependent on meteorological conditions), complicating interpretation of observed SO2 flux trends. In contrast to anthropogenic plumes, SO2 lifetimes are poorly constrained for tropospheric volcanic plumes, where the few previous loss rate estimates vary widely (from ≪1 to >99% per hour). We report experiments conducted on the boundary layer plume of Masaya volcano, Nicaragua, during the dry season. We found that SO2 fluxes showed negligible variation with plume age or diurnal variations in temperature, relative humidity and insolation, providing confirmation that remote SO2 flux measurements (typically of ≈500-2000 s old plumes) are reliable proxies for source emissions for ash-free tropospheric plumes not emitted into cloud or fog. Copyright 2004 by the American Geophysical Union.
Abstract:
This paper considers a variant of the classical problem of minimizing makespan in a two-machine flow shop. In this variant, each job has three operations, where the first operation must be performed on the first machine, the second operation can be performed on either machine but cannot be preempted, and the third operation must be performed on the second machine. The NP-hard nature of the problem motivates the design and analysis of approximation algorithms. It is shown that a schedule in which the operations are sequenced arbitrarily, but without inserted machine idle time, has a worst-case performance ratio of 2. Also, an algorithm that constructs four schedules and selects the best is shown to have a worst-case performance ratio of 3/2. A polynomial time approximation scheme (PTAS) is also presented.
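As a small worked illustration (not one of the paper's approximation algorithms), fixing the machine for each job's flexible middle operation reduces an instance to an ordinary two-machine flow shop, whose makespan for a given job sequence follows the usual completion-time recursion; the example heuristic noted in the comments is an assumption for illustration.

```python
# Illustrative sketch under stated assumptions: fold each job's flexible
# middle operation onto one machine, then evaluate the resulting
# two-machine flow shop schedule for a given job sequence.

def makespan(sequence, assign_middle_to_m1):
    """sequence: list of (a, b, c) operation times per job;
    assign_middle_to_m1: list of booleans, one per job, True if the
    flexible operation b runs on machine 1."""
    c1 = c2 = 0.0
    for (a, b, c), on_m1 in zip(sequence, assign_middle_to_m1):
        m1_work = a + (b if on_m1 else 0.0)
        m2_work = (0.0 if on_m1 else b) + c
        c1 += m1_work                 # machine 1 processes jobs back to back
        c2 = max(c2, c1) + m2_work    # machine 2 waits for the job's M1 work
    return c2

# One cheap heuristic (an assumption, not from the paper) is to evaluate a
# few assignments of the middle operations for a fixed sequence and keep
# the best; the paper's best-of-four construction achieves a worst-case
# ratio of 3/2, and its PTAS does better at higher computational cost.
```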