12 results for Low-Level Laser Therapy

in Boston University Digital Common


Relevance:

100.00%

Publisher:

Abstract:

Current low-level networking abstractions on modern operating systems are commonly implemented in the kernel to provide sufficient performance for general-purpose applications. However, it is desirable for high-performance applications to have more control over the networking subsystem to support optimizations for their specific needs. One approach is to allow networking services to be implemented at user level. Unfortunately, this typically incurs costs due to scheduling overheads and unnecessary data copying via the kernel. In this paper, we describe a method to implement efficient application-specific network service extensions at user level that removes the cost of scheduling and provides protected access to lower-level system abstractions. We present a networking implementation that, with minor modifications to the Linux kernel, passes data between "sandboxed" extensions and the Ethernet device without copying or processing in the kernel. Using this mechanism, we put a customizable networking stack into a user-level sandbox and show how it can be used to efficiently process and forward data via proxies, or intermediate hosts, in the communication path of high-performance data streams. Unlike other user-level networking implementations, our method imposes no special hardware requirements to avoid unnecessary data copies. Results show that our networking stack implementation achieves a substantial increase in throughput over comparable user-space methods.
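
As a rough point of comparison (not the paper's zero-copy mechanism), the sketch below shows a conventional user-space Ethernet frame forwarder built on Linux AF_PACKET raw sockets; every frame is copied through the kernel twice, which is precisely the overhead the sandboxed, kernel-modified path described above is designed to eliminate. Interface names are placeholders.

```python
# Hypothetical baseline: a conventional user-space Ethernet frame forwarder.
# Unlike the paper's mechanism, every frame here is copied through the kernel,
# which is exactly the overhead the sandboxed zero-copy path avoids.
# Requires Linux and root privileges; interface names are placeholders.
import socket

ETH_P_ALL = 0x0003  # receive frames of every protocol (Linux-specific constant)

def forward_frames(in_iface: str, out_iface: str, bufsize: int = 65535) -> None:
    """Relay raw Ethernet frames from in_iface to out_iface."""
    rx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
    tx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    rx.bind((in_iface, 0))
    tx.bind((out_iface, 0))
    while True:
        frame = rx.recv(bufsize)   # copy #1: kernel -> user space
        tx.send(frame)             # copy #2: user space -> kernel

if __name__ == "__main__":
    forward_frames("eth0", "eth1")  # placeholder interface names
```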

Relevance:

100.00%

Publisher:

Abstract:

We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how "content" should be routed. For example, content may be diverted through an intermediary DTN node for the purposes of preprocessing, authentication, etc. To support this capability, we implement Predicate Routing [7], where high-level constraints of DTN nodes are mapped into low-level routing predicates at the MANET level. Our testbed uses a Linux system architecture and leverages User Mode Linux [2] to emulate every node running a DTN Reference Implementation code [5]. In our initial prototype, we use the On Demand Distance Vector (AODV) MANET routing protocol. We use the network simulator ns-2 (ns-emulation version) to simulate the mobility and wireless connectivity of both DTN and MANET nodes. We present preliminary throughput results demonstrating the efficient and correct operation of propagating routing predicates and, as a side effect, the performance benefit of content re-routing that dynamically (on demand) breaks the underlying end-to-end TCP connection into shorter TCP connections.
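
To make the mapping idea concrete, here is a toy, illustrative sketch (not the testbed code) of compiling a single high-level content-routing constraint into per-node forwarding predicates; the Policy and Predicate shapes, node names, and the routing-table lookup are all hypothetical stand-ins.

```python
# Illustrative only: a toy mapping from a high-level DTN content policy to
# per-node forwarding predicates, in the spirit of Predicate Routing.
# Policy fields, rule format, and node identifiers are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    content_type: str   # e.g. "image"
    src: str            # originating DTN node
    dst: str            # destination DTN node
    via: str            # intermediary that must preprocess/authenticate

def compile_policy(policy: Policy, route_to: dict[str, dict[str, str]]) -> list[tuple[str, dict]]:
    """Expand one high-level constraint into (node, predicate) forwarding rules.

    route_to[node][target] gives node's next hop toward target (assumed known,
    e.g. from the underlying MANET routing protocol such as AODV).
    """
    rules = []
    for node, table in route_to.items():
        if node == policy.via:
            # The intermediary processes matching content, then sends it onward.
            rules.append((node, {"match": policy.content_type, "action": "process",
                                 "next_hop": table[policy.dst]}))
        else:
            # Every other node diverts matching content toward the intermediary.
            rules.append((node, {"match": policy.content_type,
                                 "next_hop": table[policy.via]}))
    return rules

# Example: divert "image" content from n1 to n3 through intermediary n2.
routes = {"n1": {"n2": "n2", "n3": "n2"},
          "n2": {"n2": "n2", "n3": "n3"},
          "n3": {"n2": "n2", "n3": "n3"}}
print(compile_policy(Policy("image", "n1", "n3", "n2"), routes))
```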

Relevance:

100.00%

Publisher:

Abstract:

We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how “content” should be routed. For example, content can be directed through an intermediary DTN node for the purposes of preprocessing, authentication, etc., or content from a malicious MANET node can be dropped. To support such content routing at the DTN level, we implement Predicate Routing [1], where high-level constraints of DTN nodes are mapped into low-level routing predicates within the MANET nodes. Our testbed [2] uses a Linux system architecture with User Mode Linux [3] to emulate every DTN node running a DTN Reference Implementation code [4]. In our initial architecture prototype, we use the On Demand Distance Vector (AODV) routing protocol at the MANET level. We use the network simulator ns-2 (ns-emulation version) to simulate the wireless connectivity of both DTN and MANET nodes. Preliminary results show the efficient and correct operation of propagating routing predicates. For the application of content re-routing through an intermediary, results also demonstrate, as a side effect, the performance benefit of dynamically (on demand) breaking the underlying end-to-end TCP connections into shorter TCP connections.
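
The "shorter TCP connections" effect can be illustrated, independently of the DTN testbed, by a minimal split-connection relay: the intermediary terminates the incoming TCP connection and opens a new one toward the next hop, so an end-to-end transfer becomes two back-to-back TCP legs. The host name and ports below are placeholders, and this is not the testbed implementation.

```python
# A minimal split-connection TCP relay: the intermediary terminates the
# incoming TCP connection and opens a new, shorter TCP connection toward the
# next hop, so the end-to-end path becomes two back-to-back TCP connections.
# Host names and ports are placeholders; not the testbed implementation.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def relay(listen_port: int, next_hop: tuple[str, int]) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen()
    while True:
        upstream, _ = srv.accept()                       # first, shorter TCP leg
        downstream = socket.create_connection(next_hop)  # second TCP leg
        threading.Thread(target=pipe, args=(upstream, downstream), daemon=True).start()
        threading.Thread(target=pipe, args=(downstream, upstream), daemon=True).start()

if __name__ == "__main__":
    relay(9000, ("next-hop.example", 9000))  # placeholder next hop
```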

Relevance:

100.00%

Publisher:

Abstract:

A combined 2D/3D approach is presented that allows for robust tracking of moving bodies in a given environment, as observed via a single, uncalibrated video camera. Tracking remains robust even in the presence of occlusions. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that combines low-level (image processing) and mid-level (recursive trajectory estimation) information obtained during the tracking process. The resulting system can segment and maintain the tracking of moving objects before, during, and after occlusion. At each frame, the system also extracts a stabilized coordinate frame of the moving objects. This stabilized frame is used to resize and resample the moving blob so that it can be used as input to motion recognition modules. The approach enables robust tracking without requiring the system to know the shape of the tracked objects beforehand, although some assumptions are made about the characteristics of the objects' shape and how it evolves with time. Experiments in tracking moving people are described.
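
A simplified sketch of the "stabilized, size-normalized view" step is shown below: crop the frame to the tracked blob's bounding box and resample it to a canonical resolution so downstream motion-recognition modules receive a fixed-size input. This is only illustrative; the system described above also uses the estimated trajectory for stabilization, and the function name and output size are assumptions.

```python
# Illustrative sketch: extract a fixed-size, size-normalized view of a tracked
# blob by cropping its bounding box and resampling with nearest-neighbor
# indexing. Not the paper's exact stabilization procedure.
import numpy as np

def stabilized_view(frame: np.ndarray, mask: np.ndarray, out_size=(64, 64)) -> np.ndarray:
    """Crop `frame` to the bounding box of boolean `mask` and resample it."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return np.zeros(out_size, dtype=frame.dtype)     # no blob this frame
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = frame[y0:y1, x0:x1]
    # Nearest-neighbor resampling to the canonical output size.
    row_idx = np.linspace(0, crop.shape[0] - 1, out_size[0]).round().astype(int)
    col_idx = np.linspace(0, crop.shape[1] - 1, out_size[1]).round().astype(int)
    return crop[np.ix_(row_idx, col_idx)]
```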

Relevance:

100.00%

Publisher:

Abstract:

A mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. A novel extended Kalman filter formulation is used to estimate the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and an error measure that are exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities, e.g., running, walking, roller blading, and cycling. Experiments with synthetic data sets are used to evaluate the stability of the trajectory estimator with respect to noise.
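
For readers unfamiliar with the recursive predict/update structure being exploited here, the sketch below shows a deliberately simplified linear, constant-velocity Kalman filter over 2D image positions. The paper uses a novel extended Kalman filter estimating relative 3D trajectories up to a scale factor; this sketch only illustrates how a prediction and an error (innovation) measure become available at every frame. All parameter values are arbitrary.

```python
# Simplified illustration of the recursive predict/update cycle: a linear,
# constant-velocity Kalman filter over 2D positions. The actual system uses
# an extended Kalman filter over 3D trajectories up to scale.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt: float = 1.0, q: float = 1e-2, r: float = 1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)    # position-only measurements
        self.Q = q * np.eye(4)                      # process noise covariance
        self.R = r * np.eye(2)                      # measurement noise covariance
        self.x = np.zeros(4)                        # state: [x, y, vx, vy]
        self.P = np.eye(4)                          # state covariance

    def step(self, z: np.ndarray):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Innovation: the error measure available to higher-level stages
        innovation = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        # Update
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x.copy(), innovation
```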

Relevance:

100.00%

Publisher:

Abstract:

A combined 2D/3D approach is presented that allows for robust tracking of moving people and recognition of actions. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. A novel extended Kalman filter formulation is used to estimate the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and an error measure that are exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object that are then used as input to action recognition modules. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities: running, walking, roller blading, and cycling. Experiments with real and synthetic data sets are used to evaluate the stability of the trajectory estimator with respect to noise.
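
A minimal motion history image (MHI) update, as commonly defined in the literature, is sketched below: pixels where motion is detected are set to a maximum duration tau and decay by one each frame otherwise. Features of such images could then feed a mixture-of-Gaussians classifier; this sketch covers only the MHI itself, and the threshold and tau values are arbitrary, not taken from the paper.

```python
# Minimal motion history image (MHI) update: moving pixels are stamped with a
# duration tau, all other pixels decay by one per frame. Illustrative only;
# diff_thresh and tau are arbitrary values, not from the paper.
import numpy as np

def update_mhi(mhi: np.ndarray, motion_mask: np.ndarray, tau: int = 30) -> np.ndarray:
    """Update an MHI given a boolean mask of pixels that moved this frame."""
    decayed = np.maximum(mhi - 1, 0)
    return np.where(motion_mask, tau, decayed)

def mhi_from_frames(frames: np.ndarray, diff_thresh: float = 15.0, tau: int = 30) -> np.ndarray:
    """Accumulate an MHI over a (T, H, W) grayscale sequence via frame differencing."""
    mhi = np.zeros(frames.shape[1:], dtype=np.int32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        mask = np.abs(curr.astype(float) - prev.astype(float)) > diff_thresh
        mhi = update_mhi(mhi, mask, tau)
    return mhi
```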

Relevance:

100.00%

Publisher:

Abstract:

We propose to investigate a model-based technique for encoding non-rigid object classes in terms of object prototypes. Objects from the same class can be parameterized by identifying shape and appearance invariants of the class to devise low-level representations. The approach presented here creates a flexible model for an object class from a set of prototypes. This model is then used to estimate the parameters of the low-level representation of novel objects as combinations of the prototype parameters. Variations in object shape are modeled as non-rigid deformations. Appearance variations are modeled as intensity variations. In the training phase, the system is presented with several example prototype images. These prototype images are registered to a reference image by a finite-element-based technique called Active Blobs. The deformations of the finite element model required to register a prototype image with the reference image provide the shape description, or shape vector, for that prototype. The shape vector for each prototype is then used to warp the prototype image onto the reference image and obtain the corresponding texture vector. The prototype texture vectors, having been warped onto the same reference image, are in pixel-by-pixel correspondence with each other and hence are "shape normalized". Given a sufficient number of prototypes that exhibit appropriate in-class variations, the shape and texture vectors define a linear prototype subspace that spans the object class. Each prototype is a vector in this subspace. The matching phase involves estimating a set of combination parameters for synthesizing the novel object by combining the prototype shape and texture vectors. The strength of this technique lies in the combined estimation of both shape and appearance parameters, in contrast with previous approaches in which shape and appearance parameters were estimated separately.
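
A rough numerical reading of the linear-subspace step is sketched below: with prototype shape vectors and texture vectors stacked as columns, a novel example's shape and texture are approximated as a shared linear combination of the prototypes, and the combination coefficients are estimated jointly by least squares. The full matching procedure above also handles registration; the variable names and the weighting factor here are illustrative assumptions.

```python
# Rough sketch of joint shape+appearance combination-parameter estimation:
# S has one prototype shape vector per column, T one texture vector per column,
# and one coefficient vector c is shared by both. Names and the weight alpha
# are illustrative; this omits the registration stage.
import numpy as np

def estimate_combination(S: np.ndarray, T: np.ndarray,
                         shape_new: np.ndarray, texture_new: np.ndarray,
                         alpha: float = 1.0) -> np.ndarray:
    """Return c minimizing ||S c - shape_new||^2 + alpha * ||T c - texture_new||^2."""
    A = np.vstack([S, np.sqrt(alpha) * T])                  # stack shape and texture constraints
    b = np.concatenate([shape_new, np.sqrt(alpha) * texture_new])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

# The novel object is then synthesized from the prototypes:
#   shape_hat, texture_hat = S @ c, T @ c
```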

Relevance:

100.00%

Publisher:

Abstract:

A novel approach for estimating articulated body posture and motion from monocular video sequences is proposed. Human pose is defined as the instantaneous two-dimensional configuration (i.e., the projection onto the image plane) of a single articulated body in terms of the positions of a predetermined set of joints. First, statistical segmentation of the human body from the background is performed, and low-level visual features are computed from the segmented body shape. The goal is to map these generally low-level visual features to body configurations. The system estimates several mappings, each one associated with a specific cluster in the visual feature space. Given a set of body motion sequences for training, unsupervised clustering is obtained via the Expectation-Maximization algorithm. Then, for each cluster, a function is estimated to map low-level features to 3D pose; currently this mapping is modeled by a neural network. Given new visual features, the mapping from each cluster is applied to yield a set of possible poses. From this set, the system selects the most likely pose given the learned probability distribution and the visual feature similarity between the hypothesis and the input. Performance of the proposed approach is characterized using a new set of known body postures, showing promising results.
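
A compact sketch of the cluster-then-map idea, using scikit-learn as a stand-in, is given below: features are clustered via EM (GaussianMixture), one regressor per cluster maps features to pose, and at test time each cluster proposes a pose hypothesis. Here the hypothesis with the highest cluster posterior is returned; the system described above additionally scores hypotheses by feature similarity, and the network sizes and cluster count below are arbitrary assumptions.

```python
# Illustrative cluster-then-map pipeline: EM clustering of visual features,
# one per-cluster regressor mapping features to pose, and selection of the
# hypothesis from the most responsible cluster. Not the paper's exact system;
# cluster count and network size are arbitrary.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPRegressor

def train(features: np.ndarray, poses: np.ndarray, n_clusters: int = 4):
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(features)
    labels = gmm.predict(features)
    mappings = []
    for k in range(n_clusters):
        reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        reg.fit(features[labels == k], poses[labels == k])   # per-cluster feature -> pose map
        mappings.append(reg)
    return gmm, mappings

def infer(gmm, mappings, feature_vec: np.ndarray) -> np.ndarray:
    posteriors = gmm.predict_proba(feature_vec[None, :])[0]           # cluster responsibilities
    hypotheses = [m.predict(feature_vec[None, :])[0] for m in mappings]  # one pose per cluster
    return hypotheses[int(np.argmax(posteriors))]
```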

Relevance:

100.00%

Publisher:

Abstract:

A non-linear supervised learning architecture, the Specialized Mapping Architecture (SMA), and its application to articulated body pose reconstruction from single monocular images are described. The architecture is formed by a number of specialized mapping functions, each intended to map certain portions (connected or not) of the input space, and a feedback matching process. A probabilistic model for the architecture is described, along with a mechanism for learning its parameters. The learning problem is approached using a maximum likelihood estimation framework; we present Expectation-Maximization (EM) algorithms for two different instances of the likelihood probability. Performance is characterized by estimating human body postures from low-level visual features, showing promising results.
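
As a generic illustration of what EM-based maximum-likelihood learning of several mapping functions looks like, the sketch below fits a mixture of linear regressors: the E-step assigns responsibilities of each mapping to each sample, and the M-step refits each mapping by weighted least squares. The SMA's actual mapping functions, likelihood instances, and feedback matching differ; everything here is a simplified stand-in.

```python
# Compact EM loop for a mixture of linear regressors, a simplified stand-in
# for maximum-likelihood learning of several specialized mapping functions.
# The SMA's mapping functions and likelihood models differ from this sketch.
import numpy as np

def em_mixture_of_regressors(X, Y, n_maps=3, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xb = np.hstack([X, np.ones((n, 1))])            # add bias column
    W = [rng.normal(size=(Xb.shape[1], Y.shape[1])) for _ in range(n_maps)]
    sigma2 = np.ones(n_maps)                        # per-mapping noise variance
    pi = np.full(n_maps, 1.0 / n_maps)              # mixing proportions
    for _ in range(n_iters):
        # E-step: responsibility of each mapping function for each sample.
        log_r = np.stack([np.log(pi[k])
                          - 0.5 * np.sum((Y - Xb @ W[k]) ** 2, axis=1) / sigma2[k]
                          - 0.5 * Y.shape[1] * np.log(sigma2[k])
                          for k in range(n_maps)], axis=1)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares refit of each mapping, noise, and priors.
        for k in range(n_maps):
            w = r[:, k]
            A = Xb * w[:, None]
            W[k] = np.linalg.lstsq(A.T @ Xb, A.T @ Y, rcond=None)[0]
            resid = np.sum(w * np.sum((Y - Xb @ W[k]) ** 2, axis=1))
            sigma2[k] = resid / (w.sum() * Y.shape[1]) + 1e-8
        pi = r.mean(axis=0)
    return W, pi, sigma2
```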

Relevance:

100.00%

Publisher:

Abstract:

Sensor applications in Sensoria [1] are expressed using STEP (Sensorium Task Execution Plan). SNAFU (Sensor-Net Applications as Functional Units) serves as a high-level sensor-programming language, which is compiled into STEP. In SNAFU’s current form, its differences from STEP are relatively minor, limited to shorthands and macros not available in STEP. We show, however, that despite how restrictive it may seem, SNAFU has universal power; technically, it is a Turing-complete language, i.e., any Turing machine can be simulated by a SNAFU program (though not always conveniently). While STEP, as a low-level language not directly available to Sensorium users, may be allowed to have universal power, SNAFU programmers could use this power for malicious purposes or inadvertently introduce errors with destructive consequences. In future developments of SNAFU, we plan to introduce restrictions and high-level features with safety guards, such as those provided by a type system, which will make SNAFU programming safer.

Relevance:

100.00%

Publisher:

Abstract:

Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, the conclusion that perceptual learning reflects low-level sensory plasticity remains highly controversial, largely because such learning can often be attributed to plasticity in later stages of sensory processing or in the decision processes. To address this controversy, we developed a novel random dot motion (RDM) stimulus that targets motion cells selective for contrast polarity, by ensuring that the motion direction information arises only from signal dot onsets and not from their offsets, and used these stimuli in conjunction with the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning of a stimulus is achieved by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells, with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show learning for the exposed contrast polarity and no transfer of this learning to the unexposed contrast polarity. These results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells.
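
For context, the sketch below generates a generic coherent random dot motion frame update with limited-lifetime noise dots: a fraction of dots step in the signal direction while the rest are replotted at random. The study's stimulus additionally restricts direction information to signal dot onsets of a single contrast polarity, which this generic sketch does not attempt to reproduce; all parameter values are arbitrary.

```python
# Generic coherent random dot motion (RDM) frame update. This is a standard
# design for illustration only; it does not reproduce the onset-only,
# polarity-specific manipulation described in the abstract.
import numpy as np

def step_rdm(dots: np.ndarray, coherence: float, direction_deg: float,
             speed: float, field_size: float, rng: np.random.Generator) -> np.ndarray:
    """Advance dot positions (N x 2, in [0, field_size)) by one frame."""
    n = dots.shape[0]
    signal = rng.random(n) < coherence                     # which dots carry signal this frame
    theta = np.deg2rad(direction_deg)
    step = np.array([np.cos(theta), np.sin(theta)]) * speed
    new = dots.copy()
    new[signal] += step                                    # coherent signal dots
    new[~signal] = rng.random((np.count_nonzero(~signal), 2)) * field_size  # noise dots replotted
    return np.mod(new, field_size)                         # wrap around the aperture

rng = np.random.default_rng(0)
dots = rng.random((100, 2)) * 10.0
for _ in range(60):
    dots = step_rdm(dots, coherence=0.5, direction_deg=0.0, speed=0.1,
                    field_size=10.0, rng=rng)
```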

Relevance:

100.00%

Publisher:

Abstract:

A computational model of visual processing in the vertebrate retina provides a unified explanation of a range of data previously treated by disparate models. Three results are reported here: the model proposes a functional explanation for the primary feed-forward retinal circuit found in vertebrate retinae, it shows how this retinal circuit combines nonlinear adaptation with the desirable properties of linear processing, and it accounts for the origin of parallel transient (nonlinear) and sustained (linear) visual processing streams as simple variants of the same retinal circuit. The retina, owing to its accessibility and to its fundamental role in the initial transduction of light into neural signals, is among the most extensively studied neural structures in the nervous system. Since the pioneering anatomical work by Ramón y Cajal at the turn of the last century [1], technological advances have enabled detailed descriptions of the physiological, pharmacological, and functional properties of many types of retinal cells. However, the relationship between structure and function in the retina is still poorly understood. This article outlines a computational model developed to address fundamental constraints of biological visual systems. Neurons that process nonnegative input signals, such as retinal illuminance, are subject to an inescapable tradeoff between accurate processing in the spatial and temporal domains. Accurate processing in both domains can be achieved with a model that combines nonlinear mechanisms for temporal and spatial adaptation within three layers of feed-forward processing. The resulting architecture is structurally similar to the feed-forward retinal circuit connecting photoreceptors to retinal ganglion cells through bipolar cells. This similarity suggests that the three-layer structure observed in all vertebrate retinae [2] is a required minimal anatomy for accurate spatiotemporal visual processing. This hypothesis is supported through computer simulations showing that the model's output layer accounts for many properties of retinal ganglion cells [3], [4], [5], [6]. Moreover, the model shows how the retina can extend its dynamic range through nonlinear adaptation while exhibiting seemingly linear behavior in response to a variety of spatiotemporal input stimuli. This property is the basis for the prediction that the same retinal circuit can account for both sustained (X) and transient (Y) cat ganglion cells [7] by simple morphological changes. The ability to generate distinct functional behaviors by simple changes in cell morphology suggests that different functional pathways originating in the retina may have evolved from a unified anatomy designed to cope with the constraints of low-level biological vision.
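
To illustrate the kind of three-layer architecture described above, the sketch below chains a nonlinear (divisive) adaptation stage standing in for photoreceptors, a linear center-surround stage standing in for bipolar cells, and a rectified output stage standing in for ganglion cells. It is a generic illustration, not the authors' model; the filter sizes and half-saturation constant are arbitrary assumptions.

```python
# Generic three-stage feed-forward sketch: divisive (nonlinear) adaptation,
# linear center-surround opponency, and a rectified output layer. Illustrative
# only; parameters are arbitrary and this is not the authors' retinal model.
import numpy as np
from scipy.ndimage import uniform_filter

def retina_sketch(luminance: np.ndarray, half_sat: float = 50.0,
                  center: int = 3, surround: int = 9) -> np.ndarray:
    """Map a nonnegative luminance image to a rectified 'ganglion' response."""
    adapted = luminance / (luminance + half_sat)            # divisive adaptation (nonlinear)
    center_resp = uniform_filter(adapted, size=center)      # small linear center pooling
    surround_resp = uniform_filter(adapted, size=surround)  # larger linear surround pooling
    bipolar = center_resp - surround_resp                   # center-surround opponency
    return np.maximum(bipolar, 0.0)                         # rectified output layer
```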