16 results for 346.023 R457L
in Boston University Digital Common
Abstract:
This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, 1D embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure of the original space. Performance is evaluated in a hand pose estimation system and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures. In both systems, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, namely FastMap and Bourgain embeddings.
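A minimal sketch of the retrieval-side computation this describes (the learning of the embedding is omitted, and the reference objects, weights, and toy distance below are all illustrative assumptions): one common family of 1D embeddings maps an object x to its distance D(x, r) from a reference object r, and the learned weights define the weighted Manhattan distance used for fast filtering.

    import numpy as np

    def embed(x, references, D):
        # One 1D embedding per reference object: coordinate j is the
        # (expensive) distance from x to reference r_j.
        return np.array([D(x, r) for r in references])

    def weighted_manhattan(u, v, weights):
        # Cheap proxy distance in the embedded space.
        return np.sum(weights * np.abs(u - v))

    # Toy setup: the "expensive" distance is plain Euclidean here.
    D = lambda a, b: float(np.linalg.norm(a - b))
    rng = np.random.default_rng(0)
    database = rng.normal(size=(1000, 16))   # database objects
    references = database[:8]                # hypothetical reference objects
    weights = np.ones(8)                     # stand-in for learned weights

    db_emb = np.array([embed(x, references, D) for x in database])
    q_emb = embed(rng.normal(size=16), references, D)
    # Filter step: rank database objects by the cheap embedded distance.
    ranks = np.argsort([weighted_manhattan(q_emb, e, weights) for e in db_emb])
    print(ranks[:5])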
Abstract:
Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical, and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is the estimation of local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis that gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and in constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
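The core inversion can be illustrated with a toy slotted-contention model (an assumption chosen for illustration; the paper's analysis is more detailed): if each of n neighbors transmits in a slot independently with known probability p, a listener observes a busy slot with probability 1 − (1 − p)^n, so a measured busy fraction can be inverted to estimate n without exchanging any identifiers.

    import numpy as np

    def estimate_density(busy_fraction, p):
        # Invert q = 1 - (1 - p)^n for n, given measured busy fraction q
        # (requires busy_fraction < 1).
        return np.log(1.0 - busy_fraction) / np.log(1.0 - p)

    rng = np.random.default_rng(1)
    n_true, p, slots = 40, 0.02, 2000
    # A slot is busy if at least one of the n_true neighbors transmits in it.
    busy = rng.random((slots, n_true)) < p
    q_hat = np.mean(busy.any(axis=1))
    print(f"true n = {n_true}, estimated n = {estimate_density(q_hat, p):.1f}")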
Abstract:
The design of programs for broadcast disks that incorporate real-time and fault-tolerance requirements is considered. A generalized model for real-time fault-tolerant broadcast disks is defined. It is shown that designing programs for broadcast disks specified in this model is closely related to the scheduling of pinwheel task systems. Some new results in pinwheel scheduling theory are derived, which facilitate the efficient generation of real-time fault-tolerant broadcast disk programs.
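For intuition, a pinwheel task system assigns each task i an integer a_i and requires that the task be scheduled at least once in every window of a_i consecutive slots. The sketch below is an illustration under the assumption that all a_i are powers of two (a classically tractable special case), and is not the paper's construction: each task gets a fixed offset so that its slots never collide with another task's.

    def power_of_two_pinwheel(periods):
        # periods: powers of two with sum(1/a) <= 1. Task with period a
        # occupies slots {offset + k*a}; for powers of two, two tasks
        # collide iff their offsets agree modulo the smaller period.
        assigned = []  # (period, offset), built in ascending period order
        for a in sorted(periods):
            for o in range(a):
                if all(o % a2 != o2 for a2, o2 in assigned):
                    assigned.append((a, o))
                    break
            else:
                return None  # no free offset: instance not schedulable this way
        return assigned

    def slot_task(t, assigned):
        # Which task (by position in `assigned`) owns slot t, if any.
        for i, (a, o) in enumerate(assigned):
            if t % a == o:
                return i
        return None  # idle slot

    plan = power_of_two_pinwheel([2, 4, 8, 8])
    print([slot_task(t, plan) for t in range(8)])  # -> [0, 1, 0, 2, 0, 1, 0, 3]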
Abstract:
Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture, and associated cache management algorithms, that turn edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. They also show that partial caching is particularly effective when bandwidth variability is not very high.
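One plausible instantiation of the partial-caching idea (a sketch only; the object parameters and the greedy rule are hypothetical, not the paper's algorithms): cache for each stream the prefix that the origin path, at its measured bandwidth, cannot deliver in time, and spend the cache budget on the most popular streams first.

    # All numbers are hypothetical; this sketches partial caching with
    # joint cache/origin delivery, not the paper's cache management scheme.
    objects = [
        # (name, popularity, bit_rate_kbps, size_kb, path_bandwidth_kbps)
        ("a", 0.50, 300, 4000, 200),
        ("b", 0.30, 150, 2000, 200),
        ("c", 0.20, 600, 8000, 200),
    ]

    def prefix_needed(bit_rate, size, bandwidth):
        # Prefix that must come from the cache so the origin path can
        # sustain the remainder without stalls: the origin can ship
        # bandwidth * (size / bit_rate) bytes during playback.
        deficit = size - bandwidth * (size / bit_rate)
        return max(0.0, deficit)

    budget = 6000.0  # cache capacity in KB (hypothetical)
    plan = []
    # Greedy: favor objects with high popularity per cached byte needed.
    for name, pop, rate, size, bw in sorted(
            objects,
            key=lambda o: -(o[1] / max(prefix_needed(o[2], o[3], o[4]), 1e-9))):
        take = min(prefix_needed(rate, size, bw), budget)
        plan.append((name, take))
        budget -= take
    print(plan)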
Abstract:
We designed the Eyebrow-Clicker, a camera-based human-computer interface system that implements a new form of binary switch. When the user raises his or her eyebrows, the binary switch is activated and a selection command is issued. The Eyebrow-Clicker thus replaces the "click" functionality of a mouse. The system initializes itself by detecting the user's eyes and eyebrows, tracks these features at frame rate, and recovers in the event of errors. The initialization uses the natural blinking of the human eye to select suitable templates for tracking. Once execution has begun, a user therefore never has to restart the program or even touch the computer. In our experiments with human-computer interaction software, the system correctly detected when a user raised his or her eyebrows 93% of the time.
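A toy illustration of the blink-driven initialization idea, using synthetic frames (the thresholds, window size, and frames are assumptions; the actual system tracks templates at frame rate): a blink produces localized frame-to-frame change near the eyes, so a simple motion-energy map can propose candidate eye regions for template selection.

    import numpy as np

    def motion_energy(prev_frame, frame, threshold=25):
        # Pixels that changed markedly between consecutive grayscale frames.
        return np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold

    def candidate_regions(mask, k=2, win=20):
        # Return the k strongest motion peaks (a crude stand-in for eye blobs).
        energy = mask.astype(float)
        peaks = []
        for _ in range(k):
            y, x = np.unravel_index(np.argmax(energy), energy.shape)
            peaks.append((y, x))
            energy[max(0, y - win):y + win, max(0, x - win):x + win] = 0
        return peaks

    # Synthetic frames: two "eye" regions darken (a blink) between frames.
    prev = np.full((120, 160), 128, dtype=np.uint8)
    cur = prev.copy()
    cur[40:48, 50:62] = 40   # left eye region changes
    cur[40:48, 98:110] = 40  # right eye region changes
    print(candidate_regions(motion_energy(prev, cur)))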
Abstract:
We consider the problem of building robust fuzzy extractors, which allow two parties holding similar random variables W, W' to agree on a secret key R in the presence of an active adversary. Robust fuzzy extractors were defined by Dodis et al. in Crypto 2006 [6] to be noninteractive, i.e., only one message P, which can be modified by an unbounded adversary, can pass from one party to the other. This allows them to be used by a single party at different points in time (e.g., for key recovery or biometric authentication), but it also presents an additional challenge: what if R is used, and thus possibly observed by the adversary, before the adversary has a chance to modify P? Fuzzy extractors secure against such a strong attack are called post-application robust. We construct a fuzzy extractor with post-application robustness that extracts a shared secret key of up to (2m − n)/2 bits (depending on error-tolerance and security parameters), where n is the bit-length and m is the entropy of W. The previously best known result, also due to Dodis et al. [6], extracted up to (2m − n)/3 bits (depending on the same parameters).
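For a concrete comparison with hypothetical parameters: if W is n = 1000 bits long with entropy m = 700, the new construction extracts up to (2·700 − 1000)/2 = 200 bits, whereas the earlier bound gives (2·700 − 1000)/3 ≈ 133 bits.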
Abstract:
A common design of an object recognition system has two steps: a detection step followed by a foreground within-class classification step. For example, consider face detection by a boosted cascade of detectors followed by face ID recognition via one-vs-all (OVA) classifiers. Another example is human detection followed by pose recognition. Although the detection step can be quite fast, the foreground within-class classification process can be slow and becomes a bottleneck. In this work, we formulate a filter-and-refine scheme in which the binary outputs of the weak classifiers in a boosted detector are used to quickly identify a small number of candidate foreground state hypotheses via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the FRGC V2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view angle estimation on a multi-view vehicle data set. On all data sets, our approach achieves comparable accuracy and is at least five times faster than the brute-force approach.
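A minimal sketch of the filter step on toy data (the binary codes and the omitted refine classifier are stand-ins, not the paper's detectors): each stored hypothesis is indexed by the weak classifiers' binary outputs, and a shortlist is selected by weighted Hamming distance before any expensive classifier runs.

    import numpy as np

    def hamming_filter(query_bits, db_bits, weights=None, k=10):
        # db_bits: (N, d) binary codes from the detector's weak classifiers;
        # query_bits: (d,). Returns indices of the k nearest codes under
        # (weighted) Hamming distance.
        diff = db_bits != query_bits
        w = np.ones(db_bits.shape[1]) if weights is None else weights
        return np.argsort(diff @ w)[:k]

    rng = np.random.default_rng(0)
    db = rng.integers(0, 2, size=(5000, 64))   # stored hypothesis codes
    q = db[1234] ^ (rng.random(64) < 0.05)     # noisy query near item 1234
    shortlist = hamming_filter(q, db, k=5)
    # Refine step: run the expensive classifier only on `shortlist` (omitted).
    print(shortlist)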
Abstract:
Understanding the nature of the workloads and system demands created by users of the World Wide Web is crucial to properly designing and provisioning Web services. Previous measurements of Web client workloads have been shown to exhibit a number of characteristic features; however, it is not clear how those features may be changing with time. In this study we compare two measurements of Web client workloads separated in time by three years, both captured from the same computing facility at Boston University. The older dataset, obtained in 1995, is well known in the research literature and has been the basis for a wide variety of studies. The newer dataset was captured in 1998 and is comparable in size to the older dataset. The new dataset has the drawback that the collection of users measured may no longer be representative of general Web users; however, using it has the advantage that many comparisons can be drawn more clearly than would be possible using a new, different source of measurement. Our results fall into two categories. First, we compare the statistical and distributional properties of Web requests across the two datasets. This serves to reinforce and deepen our understanding of the characteristic statistical properties of Web client requests. We find that the kinds of distributions that best describe document sizes have not changed between 1995 and 1998, although the specific values of the distributional parameters are different. Second, we explore the question of how the observed differences in the properties of Web client requests, particularly the popularity and temporal locality properties, affect the potential for Web file caching in the network. We find that for the computing facility represented by our traces between 1995 and 1998, (1) the benefits of using size-based caching policies have diminished; and (2) the potential for caching requested files in the network has declined.
Abstract:
Standard structure-from-motion algorithms recover the 3D structure of points. If a surface representation is desired, for example a piecewise-planar representation, then a two-step procedure typically follows: in the first step the plane-membership of points is determined manually, and in a subsequent step planes are fitted to the sets of points thus determined and their parameters are recovered. This paper presents an approach for automatically segmenting planar structures from a sequence of images while simultaneously estimating their parameters. In the proposed approach the plane-membership of points is determined automatically, and the planar structure parameters are recovered directly in the algorithm rather than indirectly in a post-processing stage. Simulated and real experimental results show the efficacy of this approach.
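The fitting half of the conventional two-step pipeline is standard total least squares; a minimal sketch on synthetic points (not the paper's integrated algorithm): the best-fit plane passes through the centroid, with normal given by the right singular vector of the smallest singular value.

    import numpy as np

    def fit_plane(points):
        # Total-least-squares plane fit: returns (centroid, unit normal).
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return centroid, vt[-1]  # singular vector of the smallest singular value

    rng = np.random.default_rng(0)
    # Synthetic points near the plane z = 0.2x - 0.1y + 3, with small noise.
    xy = rng.uniform(-1, 1, size=(200, 2))
    z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3 + 0.01 * rng.normal(size=200)
    pts = np.column_stack([xy, z])
    c, n = fit_plane(pts)
    print("normal ~", n / n[2])  # expect approximately (-0.2, 0.1, 1)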
Abstract:
Accurate head tilt detection has great potential to aid people with disabilities in the use of human-computer interfaces and to provide universal access to communication software. We show how it can be used to tab through links on a web page or control a video game with head motions. It may also be useful as a correction method for currently available video-based assistive technology that requires upright facial poses. Few of the existing computer vision methods that detect head rotations in and out of the image plane with reasonable accuracy can operate within the context of a real-time communication interface, because the computational expense they incur is too great. Our method uses a variety of metrics to obtain a robust head tilt estimate without incurring the computational cost of previous methods. Our system runs in real time on a computer with a 2.53 GHz processor, 256 MB of RAM, and an inexpensive webcam, using only 55% of the processor cycles.
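For the in-plane component of head tilt, one inexpensive measurement (an illustrative fragment, not the paper's full set of metrics) is the roll angle of the line joining the two tracked eye centers.

    import math

    def roll_angle_degrees(left_eye, right_eye):
        # In-plane head tilt from tracked eye centers (x, y), image y-down.
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        return math.degrees(math.atan2(dy, dx))

    # Right eye 12 px lower than the left: head tilted by about 13.5 degrees.
    print(roll_angle_degrees((100, 80), (150, 92)))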
Abstract:
In this report, we extend our study of the intensity of mistreatment in distributed caching groups due to state interaction. In our earlier work (published as BUCS-TR-2006-003), we analytically showed how this type of mistreatment may appear under homogeneous demand distributions, and we provided a simple setting in which mistreatment due to state interaction may occur. In this setting, one or more "overactive" nodes generate disproportionately more requests than the other nodes. Here, we extend our experimental evaluation of the intensity of mistreatment to which non-overactive nodes are subjected when the demand distributions are not homogeneous.
Abstract:
Mapping novel terrain from sparse, complex data often requires the resolution of conflicting information from sensors working at different times, locations, and scales, and from experts with different goals and situations. Information fusion methods help resolve inconsistencies in order to distinguish correct from incorrect answers, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods developed here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, or man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples.
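The hierarchy-inference idea can be illustrated outside the neural model (a toy sketch, not ARTMAP itself; the labels below are hypothetical): when one-to-many training pairs each input with several labels, strict-subset relations among the labels' example sets suggest which classes nest within which.

    # Toy sketch: infer label nesting from one-to-many annotations.
    examples = [
        {"car", "vehicle", "man-made"},
        {"truck", "vehicle", "man-made"},
        {"airplane", "vehicle", "man-made"},
        {"house", "man-made"},
    ]

    labels = set().union(*examples)
    support = {l: {i for i, ex in enumerate(examples) if l in ex} for l in labels}

    # Label a is (tentatively) below b if every example of a also carries b.
    # Prints direct and transitive edges; pruning to a minimal tree is omitted.
    for a in sorted(labels):
        for b in sorted(labels):
            if a != b and support[a] < support[b]:  # strict subset of examples
                print(f"{a} -> {b}")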
Abstract:
This article develops the Synchronous Matching Adaptive Resonance Theory (SMART) neural model to explain how the brain may coordinate multiple levels of thalamocortical and corticocortical processing to rapidly learn, and stably remember, important information about a changing world. The model clarifies how bottom-up and top-down processes work together to realize this goal, notably how processes of learning, expectation, attention, resonance, and synchrony are coordinated. The model hereby clarifies, for the first time, how the following levels of brain organization coexist to realize cognitive processing properties that regulate fast learning and stable memory of brain representations: single cell properties, such as spiking dynamics, spike-timing-dependent plasticity (STDP), and acetylcholine modulation; detailed laminar thalamic and cortical circuit designs and their interactions; aggregate cell recordings, such as current-source densities and local field potentials; and single cell and large-scale inter-areal oscillations in the gamma and beta frequency domains. In particular, the model predicts how laminar circuits of multiple cortical areas interact with primary and higher-order specific thalamic nuclei and nonspecific thalamic nuclei to carry out attentive visual learning and information processing. The model simulates how synchronization of neuronal spiking occurs within and across brain regions, and triggers STDP. Matches between bottom-up adaptively filtered input patterns and learned top-down expectations cause gamma oscillations that support attention, resonance, and learning. Mismatches inhibit learning while causing beta oscillations during reset and hypothesis testing operations that are initiated in the deeper cortical layers. The generality of learned recognition codes is controlled by a vigilance process mediated by acetylcholine.
Abstract:
A neural theory is proposed in which visual search is accomplished by perceptual grouping and segregation, which occur simultaneously across the visual field, and object recognition, which is restricted to a selected region of the field. The theory offers an alternative hypothesis to recently developed variations on Feature Integration Theory (Treisman and Sato, 1991) and the Guided Search Model (Wolfe, Cave, and Franzel, 1989). A neural architecture and search algorithm are specified that quantitatively explain a wide range of psychophysical search data (Wolfe, Cave, and Franzel, 1989; Cohen and Ivry, 1991; Mordkoff, Yantis, and Egeth, 1990; Treisman and Sato, 1991).
Abstract:
This paper attempts a rational, step-by-step reconstruction of many aspects of the mammalian neural circuitry known to be involved in the spinal cord's regulation of opposing muscles acting on skeletal segments. Mathematical analyses and local circuit simulations based on neural membrane equations are used to clarify the behavioral function of five fundamental cell types, their complex connectivities, and their physiological actions. These cell types are: α-MNs, γ-MNs, IaINs, IbINs, and Renshaw cells. It is shown that many of the complexities of spinal circuitry are necessary to ensure near-invariant realization of motor intentions when descending signals of two basic types independently vary over large ranges of magnitude and rate of change. Because these two types of signal afford independent control, or Factorization, of muscle LEngth and muscle TEnsion, our construction was named the FLETE model (Bullock and Grossberg, 1988b, 1989). The present paper significantly extends the range of experimental data encompassed by this evolving model.