32 results for statistical reports

in Boston University Digital Common


Relevance:

30.00%

Publisher:

Abstract:

Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is, however, constrained by their reliance on partial information and by the heavy computation they require, which limits their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an iterative Expectation-Maximization (EM) algorithm. First, we analyze modeling approaches for generating starting points. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
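
As an illustration of the kind of iterative estimation this abstract refers to (not the authors' actual algorithm), the following minimal Python sketch applies EM-style multiplicative updates for a Poisson linear model to recover origin-destination demands from link counts; the routing matrix, link loads, and starting point are all assumed for the example.

```python
# A minimal sketch (not the paper's algorithm): EM-style multiplicative updates
# (Richardson-Lucy / Shepp-Vardi form) for estimating origin-destination demands x
# from link counts y ~ Poisson(A x), where A is a 0/1 routing matrix.
# The "informed prior" of the abstract would enter here as the starting point x0.
import numpy as np

def em_traffic_estimate(A, y, x0, n_iter=200):
    x = np.asarray(x0, dtype=float).copy()
    col_sums = A.sum(axis=0)                      # normalization per OD pair
    for _ in range(n_iter):
        link_est = A @ x                          # predicted link loads
        ratio = y / np.maximum(link_est, 1e-12)   # observed / predicted, per link
        x *= (A.T @ ratio) / np.maximum(col_sums, 1e-12)
    return x

# Toy example: 3 OD pairs routed over 2 links (assumed topology).
A = np.array([[1, 1, 0],      # link 1 carries OD pairs 1 and 2
              [0, 1, 1]])     # link 2 carries OD pairs 2 and 3
true_x = np.array([10.0, 5.0, 20.0])
y = A @ true_x                                    # noiseless SNMP-like link counts
x0 = np.full(3, y.sum() / 3)                      # uninformed starting point
print(em_traffic_estimate(A, y, x0))
```

Because the problem is underdetermined (fewer links than OD pairs), different starting points lead to different fixed points, which is precisely why the quality of the prior matters in this setting.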

Relevance:

30.00%

Publisher:

Abstract:

Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is, however, constrained by their reliance on partial information and by the heavy computation they require, which limits their convergence behavior. In this paper we study a two-step approach for inferring network traffic demands. First, we elaborate and evaluate a modeling approach for generating good starting points to be fed to iterative statistical inference techniques. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the Expectation-Maximization (EM) algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we evaluate and compare alternative mechanisms for generating starting points, and the convergence characteristics of our EM algorithm against a recently proposed Weighted Least Squares approach.
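
One common way to build a starting point from SNMP link counts, shown here purely for illustration (the abstract does not say which modeling approach the authors use), is a gravity model that splits total traffic across origin-destination pairs in proportion to per-router ingress and egress volumes.

```python
# Illustrative gravity-model prior (not necessarily the paper's informed prior):
# T[i, j] = ingress[i] * egress[j] / total traffic, built from edge byte counts
# of the kind an SNMP poller would provide.
import numpy as np

def gravity_prior(ingress, egress):
    ingress = np.asarray(ingress, dtype=float)
    egress = np.asarray(egress, dtype=float)
    return np.outer(ingress, egress) / max(ingress.sum(), 1e-12)

ingress = [30.0, 10.0, 60.0]    # bytes entering at each edge router (assumed counters)
egress  = [20.0, 50.0, 30.0]    # bytes leaving at each edge router
print(gravity_prior(ingress, egress))   # rows: sources, columns: destinations
```

A matrix like this can be flattened and handed to an iterative estimator, such as the EM sketch shown after the previous abstract, as its starting point.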

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate that if two probability distributions D and E of sufficiently small min-entropy have statistical difference ε, then the direct-product distributions D^l and E^l have statistical difference at least roughly ε√l, provided that l is sufficiently small, smaller than roughly ε^{-4/3}. Previously known bounds did not work for few repetitions l, requiring l > ε^{-2}.
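
The quantity being bounded can be checked numerically on a simple case (this is only an illustration of the statement, not the paper's proof): compute the exact statistical difference between l-fold products of two Bernoulli distributions and compare it with ε√l.

```python
# Numerical illustration (not the paper's proof): exact statistical difference
# between Bernoulli(p)^l and Bernoulli(q)^l, compared with eps * sqrt(l).
from math import comb, sqrt

def tv_bernoulli_product(p, q, l):
    """Total variation distance between the l-fold products, summed over counts of ones."""
    return 0.5 * sum(
        comb(l, k) * abs(p**k * (1 - p)**(l - k) - q**k * (1 - q)**(l - k))
        for k in range(l + 1)
    )

p, q = 0.5, 0.55                     # base distributions at statistical difference eps
eps = tv_bernoulli_product(p, q, 1)  # eps = 0.05 here
for l in (1, 2, 4, 8, 16, 32):
    # For small l the product distance and eps * sqrt(l) are of comparable order.
    print(l, round(tv_bernoulli_product(p, q, l), 4), round(eps * sqrt(l), 4))
```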

Relevance:

30.00%

Publisher:

Abstract:

Quality of Service (QoS) guarantees are required by an increasing number of applications to ensure a minimal level of fidelity in the delivery of application data units through the network. Application-level QoS does not necessarily follow from any transport-level QoS guarantees regarding the delivery of the individual cells (e.g., ATM cells) which comprise the application's data units. The distinction between application-level and transport-level QoS guarantees is due primarily to the fragmentation that occurs when transmitting large application data units (e.g., IP packets or video frames) using much smaller network cells: the partial delivery of a data unit is useless, and the bandwidth spent to partially transmit it is wasted. The data units transmitted by an application may vary in size while arriving at a constant rate, resulting in a variable bit rate (VBR) data flow that requires QoS guarantees. Statistical multiplexing is inadequate, because no guarantees can be made and no firewall property exists between different data flows. In this paper, we present a novel resource management paradigm for the maintenance of application-level QoS for VBR flows. Our paradigm is based on Statistical Rate Monotonic Scheduling (SRMS), in which (1) each application generates its variable-size data units at a fixed rate, (2) the partial delivery of data units is of no value to the application, and (3) the QoS guarantee extended to the application is the probability that an arbitrary data unit will be successfully transmitted through the network to/from the application.
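
To see why cell-level guarantees do not translate into data-unit-level guarantees, here is a toy calculation (assuming independent cell losses, which real networks need not satisfy): the probability that a fragmented data unit arrives intact drops quickly with its size in cells.

```python
# Toy illustration of the fragmentation problem described above (assumed
# independent cell losses): a data unit of n cells is useful only if every cell
# arrives, so the unit-level success probability is (1 - p_cell) ** n.
def unit_success_probability(p_cell_loss, cells_per_unit):
    return (1.0 - p_cell_loss) ** cells_per_unit

for n in (1, 10, 100, 1000):
    print(n, round(unit_success_probability(0.001, n), 4))
# With a 0.1% cell loss rate, a 1000-cell data unit is delivered intact only
# about 37% of the time, even though 99.9% of individual cells arrive.
```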

Relevance:

30.00%

Publisher:

Abstract:

Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] for periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Similar to RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux Operating System [HSPN98, SPH 98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode where only real-time tasks are scheduled. We overview the technical issues that we had to overcome in order to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.
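
The division of labor the abstract describes can be pictured with a highly simplified skeleton (this is not the actual SRMS algorithm or the KURT Linux API; budgets, demands, and task names are invented for illustration): a per-job admission decision that enforces a per-task budget, followed by fixed-priority, rate-monotonic selection among admitted jobs.

```python
# Highly simplified skeleton (not the real SRMS algorithm or its KURT Linux API):
# each task gets a budget per aggregation window; a released job is admitted only
# if its demand fits the task's remaining budget (task isolation), and admitted
# jobs are served in fixed-priority, rate-monotonic order (shortest period first).
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    period: float              # shorter period => higher rate-monotonic priority
    budget: float              # allowance per aggregation window
    remaining: float = field(init=False)
    def __post_init__(self):
        self.remaining = self.budget

def admit(task, job_demand):
    """Admission controller: reject jobs that would overrun the task's budget."""
    if job_demand <= task.remaining:
        task.remaining -= job_demand
        return True
    return False

def pick_next(ready_jobs):
    """Fixed-priority choice: the job of the shortest-period task runs first."""
    return min(ready_jobs, key=lambda job: job[0].period) if ready_jobs else None

# Toy usage with assumed numbers.
video = Task("video", period=33.0, budget=20.0)
audio = Task("audio", period=20.0, budget=5.0)
ready = []
for task, demand in [(video, 12.0), (audio, 4.0), (video, 15.0)]:
    if admit(task, demand):                 # the third job is rejected: budget exhausted
        ready.append((task, demand))
print([t.name for t, _ in ready], pick_next(ready)[0].name)
```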

Relevance:

30.00%

Publisher:

Abstract:

Statistical properties offast-slow Ellias-Grossberg oscillators are studied in response to deterministic and noisy inputs. Oscillatory responses remain stable in noise due to the slow inhibitory variable, which establishes an adaptation level that centers the oscillatory responses of the fast excitatory variable to deterministic and noisy inputs. Competitive interactions between oscillators improve the stability in noise. Although individual oscillation amplitudes decrease with input amplitude, the average to'tal activity increases with input amplitude, thereby suggesting that oscillator output is evaluated by a slow process at downstream network sites.
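
For readers unfamiliar with fast-slow dynamics, the following sketch integrates a generic fast-excitatory / slow-inhibitory pair under clean and noisy input. It uses the classic FitzHugh-Nagumo equations as a stand-in, not the actual Ellias-Grossberg equations, and all parameters are arbitrary.

```python
# Fast-slow oscillator sketch using FitzHugh-Nagumo as a stand-in (these are NOT
# the Ellias-Grossberg equations): v is the fast excitatory variable, w the slow
# inhibitory/adaptation variable, integrated with simple Euler steps.
import numpy as np

def simulate(I=0.5, noise=0.0, eps=0.08, a=0.7, b=0.8, dt=0.05, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    v, w = -1.0, 1.0
    vs = []
    for _ in range(steps):
        drive = I + noise * rng.standard_normal()   # deterministic or noisy input
        dv = v - v**3 / 3.0 - w + drive             # fast excitation
        dw = eps * (v + a - b * w)                  # slow inhibition / adaptation
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return np.array(vs)

clean, noisy = simulate(noise=0.0), simulate(noise=0.2)
# Peak-to-trough amplitude of the fast variable in the clean and noisy runs.
print(round(float(np.ptp(clean)), 2), round(float(np.ptp(noisy)), 2))
```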

Relevance:

20.00%

Publisher:

Abstract:

http://www.archive.org/details/firsttenannualre00amerrich

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: The Framingham Heart Study (FHS), founded in 1948 to examine the epidemiology of cardiovascular disease, is among the most comprehensively characterized multi-generational studies in the world. Many collected phenotypes have substantial genetic contributors, yet most genetic determinants remain to be identified. Using single nucleotide polymorphisms (SNPs) from a 100K genome-wide scan, we examine the associations of common polymorphisms with phenotypic variation in this community-based cohort and provide a full-disclosure, web-based resource of results for future replication studies. METHODS: Adult participants (n = 1345) of the largest 310 pedigrees in the FHS, many biologically related, were genotyped with the 100K Affymetrix GeneChip. These genotypes were used to assess their contribution to 987 phenotypes collected in FHS over 56 years of follow-up, including: cardiovascular risk factors and biomarkers; subclinical and clinical cardiovascular disease; cancer and longevity traits; and traits in pulmonary, sleep, neurology, renal, and bone domains. We conducted genome-wide variance components linkage and population-based and family-based association tests. RESULTS: The participants were white of European descent and from the FHS Original and Offspring Cohorts (Offspring examination 1 mean age 32 ± 9 years, 54% women). This overview summarizes the methods, selected findings and limitations of the results presented in the accompanying series of 17 manuscripts. The presented association results are based on 70,897 autosomal SNPs meeting the following criteria: minor allele frequency ≥ 10%, genotype call rate ≥ 80%, Hardy-Weinberg equilibrium p-value ≥ 0.001, and satisfying Mendelian consistency. Linkage analyses are based on 11,200 SNPs and short-tandem repeats. Results of phenotype-genotype linkages and associations for all autosomal SNPs are posted on the NCBI dbGaP website at http://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?id=phs000007. CONCLUSION: We have created a full-disclosure resource of results, posted on the dbGaP website, from a genome-wide association study in the FHS. Because we used three analytical approaches to examine the association and linkage of 987 phenotypes with thousands of SNPs, our results must be considered hypothesis-generating and need to be replicated. The results of the FHS 100K project, with NCBI web posting, provide a resource for investigators to identify high-priority findings for replication.
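
As a concrete reading of the SNP inclusion criteria quoted above, here is a small pandas sketch that applies the same thresholds; the column names and demo values are assumptions for illustration, not the FHS 100K project's actual file format.

```python
# Sketch of the SNP inclusion criteria listed in the abstract, applied with pandas.
# Column names (maf, call_rate, hwe_p, mendel_ok) are illustrative assumptions.
import pandas as pd

def filter_snps(snps: pd.DataFrame) -> pd.DataFrame:
    return snps[
        (snps["maf"] >= 0.10)          # minor allele frequency >= 10%
        & (snps["call_rate"] >= 0.80)  # genotype call rate >= 80%
        & (snps["hwe_p"] >= 0.001)     # Hardy-Weinberg equilibrium p-value >= 0.001
        & (snps["mendel_ok"])          # consistent with Mendelian inheritance
    ]

demo = pd.DataFrame({
    "snp_id": ["snp_a", "snp_b", "snp_c"],
    "maf": [0.25, 0.05, 0.40],
    "call_rate": [0.95, 0.99, 0.70],
    "hwe_p": [0.5, 0.2, 0.9],
    "mendel_ok": [True, True, True],
})
print(filter_snps(demo))   # only snp_a passes all four criteria
```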

Relevance:

20.00%

Publisher:

Abstract:

A probabilistic, nonlinear supervised learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA employs a set of several forward mapping functions that are estimated automatically from training data. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). The SMA can model ambiguous, one-to-many mappings that may yield multiple valid output hypotheses. Once learned, the mapping functions generate a set of output hypotheses for a given input via a statistical inference procedure. The SMA inference procedure incorporates an inverse mapping or feedback function in evaluating the likelihood of each of the hypotheses. Possible feedback functions include computer graphics rendering routines that can generate images for given hypotheses. The SMA employs a variant of the Expectation-Maximization algorithm for simultaneous learning of the specialized domains along with the mapping functions, and approximate strategies for inference. The framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human’s body or hands, given silhouettes from a single image. The accuracy and stability of the SMA are also tested using synthetic images of human bodies and hands, where ground truth is known.
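
The inference pattern described above can be pictured with a small sketch (a generic stand-in, not the paper's actual SMA model): several specialized forward mappings each propose an output hypothesis for an input, and a feedback function that maps hypotheses back to the input space is used to score them.

```python
# Generic sketch of the inference pattern described above (a stand-in, not the
# paper's SMA model): each specialist proposes a hypothesis, and a feedback
# (inverse) function scores each one by how well it reproduces the input features.
import numpy as np

def infer(x, forward_maps, feedback, noise_var=1.0):
    """Return hypotheses with normalized likelihood-style scores (Gaussian error model)."""
    hypotheses = [f(x) for f in forward_maps]              # one hypothesis per specialist
    scores = []
    for h in hypotheses:
        err = np.sum((feedback(h) - x) ** 2)               # reconstruction error in input space
        scores.append(np.exp(-err / (2.0 * noise_var)))
    scores = np.array(scores)
    return hypotheses, scores / scores.sum()

# Toy 1-D example: the "pose" theta is ambiguous given the feature x = sin(theta).
forward_maps = [lambda x: np.arcsin(np.clip(x, -1, 1)),          # specialist for one input domain
                lambda x: np.pi - np.arcsin(np.clip(x, -1, 1))]  # specialist for the other
feedback = np.sin                                                # "rendering" back to the feature
hyps, probs = infer(0.5, forward_maps, feedback)
print([round(float(h), 3) for h in hyps], probs)   # two valid hypotheses, equally plausible
```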

Relevance:

20.00%

Publisher:

Abstract:

Speculative service implies that a client's request for a document is serviced by sending, in addition to the document requested, a number of other documents (or pointers thereto) that the server speculates will be requested by the client in the near future. This speculation is based on statistical information that the server maintains for each document it serves. The notion of speculative service is analogous to prefetching, which is used to improve cache performance in distributed/parallel shared memory systems, with the exception that servers (not clients) control when and what to prefetch. Using trace simulations based on the logs of our departmental HTTP server http://cs-www.bu.edu, we show that both server load and service time could be reduced considerably if speculative service is used. This is above and beyond what is currently achievable using client-side caching [3] and server-side dissemination [2]. We identify a number of parameters that could be used to fine-tune the level of speculation performed by the server.
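
The per-document statistics can be as simple as successor counts. The following sketch (class name, threshold, and document names are illustrative assumptions, not taken from the paper) records which documents tend to be requested next and pushes the most likely ones along with the requested document.

```python
# Illustrative sketch of speculative service (names and thresholds are assumptions):
# the server keeps, for each document, counts of which documents were requested
# next, and serves the most likely successors together with the requested document.
from collections import defaultdict

class SpeculativeServer:
    def __init__(self, threshold=0.3, max_push=2):
        self.successors = defaultdict(lambda: defaultdict(int))  # doc -> next doc -> count
        self.threshold = threshold                                # min estimated probability
        self.max_push = max_push                                  # cap on speculative documents

    def record(self, doc, next_doc):
        self.successors[doc][next_doc] += 1

    def serve(self, doc):
        counts = self.successors[doc]
        total = sum(counts.values())
        push = [d for d, c in sorted(counts.items(), key=lambda kv: -kv[1])
                if total and c / total >= self.threshold][: self.max_push]
        return [doc] + push          # requested document plus speculatively pushed ones

srv = SpeculativeServer()
for nxt in ["style.css"] * 7 + ["logo.gif"] * 2 + ["about.html"]:
    srv.record("index.html", nxt)
print(srv.serve("index.html"))       # ['index.html', 'style.css'] with the defaults above
```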

Relevance:

20.00%

Publisher:

Abstract:

The explosion of WWW traffic necessitates an accurate picture of WWW use, and in particular requires a good understanding of client requests for WWW documents. To address this need, we have collected traces of actual executions of NCSA Mosaic, reflecting over half a million user requests for WWW documents. In this paper we describe the methods we used to collect our traces, and the formats of the collected data. Next, we present a descriptive statistical summary of the traces we collected, which identifies a number of trends and reference patterns in WWW use. In particular, we show that many characteristics of WWW use can be modelled using power-law distributions, including the distribution of document sizes, the popularity of documents as a function of size, the distribution of user requests for documents, and the number of references to documents as a function of their overall rank in popularity (Zipf's law). Finally, we show how the power-law distributions derived from our traces can be used to guide system designers interested in caching WWW documents.
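
As a worked example of the kind of power-law characterization mentioned above (synthetic counts, not the paper's traces): if request frequency follows a Zipf law, frequency is roughly proportional to 1/rank^alpha, so a least-squares line in log-log space estimates the exponent.

```python
# Worked example of a Zipf-style power-law fit on synthetic counts (not the
# paper's traces): log(frequency) is linear in log(rank) with slope -alpha,
# so a simple log-log regression recovers the exponent.
import numpy as np

def zipf_exponent(counts):
    counts = np.sort(np.asarray(counts, dtype=float))[::-1]   # order by popularity
    ranks = np.arange(1, len(counts) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -slope

rng = np.random.default_rng(0)
ideal = 1000.0 / np.arange(1, 501) ** 1.0                      # Zipf profile with alpha = 1
noisy = ideal * rng.lognormal(sigma=0.1, size=ideal.size)      # perturbed popularity counts
print(round(zipf_exponent(noisy), 2))                          # close to 1.0
```

A log-log regression is only a rough estimator; more careful fits exist, but it is enough to show the straight-line signature that the abstract's distributions exhibit.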

Relevance:

20.00%

Publisher:

Abstract:

A new deformable shape-based method for color region segmentation is described. The method includes two stages: over-segmentation using a traditional color region segmentation algorithm, followed by deformable model-based region merging via grouping and hypothesis selection. During the second stage, region merging and object identification are executed simultaneously. A statistical shape model is used to estimate the likelihood of region groupings and model hypotheses. The prior distribution on deformation parameters is precomputed using principal component analysis over a training set of region groupings. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with similarly colored adjacent objects. Furthermore, the recovered parametric shape model can be used directly in object recognition and comparison. Experiments in segmentation and image retrieval are reported.
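
The statistical shape prior described above can be pictured with a small PCA sketch (generic, not the paper's exact model; the landmark layout and training data are invented): training shapes give a mean shape and principal deformation modes, and a candidate region grouping is scored by how cheaply its mode coefficients are explained under the learned Gaussian prior.

```python
# Generic PCA shape-prior sketch (not the paper's exact model): training shapes are
# stacked as flattened landmark vectors, PCA yields a mean shape and deformation
# modes, and a candidate shape is scored by the Mahalanobis-style cost of its
# coefficients along those modes (low cost = plausible shape).
import numpy as np

def train_shape_prior(shapes, n_modes=2):
    X = np.asarray(shapes, dtype=float)               # rows: shapes, cols: landmarks
    mean = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                              # principal deformation directions
    var = (S[:n_modes] ** 2) / max(len(X) - 1, 1)     # variance captured by each mode
    return mean, modes, var

def shape_cost(candidate, mean, modes, var):
    coeffs = modes @ (np.asarray(candidate, dtype=float) - mean)
    return float(np.sum(coeffs**2 / np.maximum(var, 1e-12)))

# Toy training set: noisy copies of a square's 4 landmarks (x1, y1, ..., x4, y4).
rng = np.random.default_rng(0)
square = np.array([0, 0, 1, 0, 1, 1, 0, 1], dtype=float)
shapes = square + 0.05 * rng.standard_normal((50, 8))
mean, modes, var = train_shape_prior(shapes)
print(shape_cost(square, mean, modes, var),                                        # small: square-like
      shape_cost(square + np.array([0, 0, 0, 0, 2, 2, 0, 0]), mean, modes, var))   # large: implausible
```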