897 results for spatio-temporal variation


Relevance: 80.00%

Abstract:

Late Mesozoic-Cenozoic volcanic rocks are well exposed in the Lhasa Terrane, southern Tibet. This study applies 40Ar/39Ar geochronology together with major and trace element and Sr-Nd-O isotopic data to constrain their spatio-temporal variation, source composition, and geodynamic setting. The results indicate that the Lhasa Terrane went through three main tectonic-magmatic cycles: (1) Oceanic subduction (140-80 Ma). As the Neo-Tethys slab subducted beneath the Eurasian Plate, fluids/melts released from the oceanic sediments and/or the slab metasomatized the subcontinental lithospheric mantle, inducing partial melting of the mantle wedge and producing calc-alkaline continental-arc volcanic rocks. (2) Continental collision. Dragged by the dense Neo-Tethys oceanic lithosphere, the Indian Plate collided with the Eurasian Plate; during roll-back and break-off the oceanic lithosphere detached from the continental lithosphere and the asthenosphere upwelled. The resulting thermal perturbation melted the overriding mantle lithosphere and produced syn-collisional magmatism: the Linzizong Formation and associated dykes. (3) Post-collisional rifting. Following detachment of the Tethys oceanic lithosphere, the Indian lithosphere was driven northward by the expansion of the Indian Ocean. Break-off of the dense Indian continental lithospheric mantle (± the thickened lower crust) disturbed the asthenosphere and triggered melting of the overriding mantle lithosphere, which had been metasomatized by melts/fluids from the subducting oceanic/continental lithosphere and the asthenosphere, producing the rift-related ultrapotassic rocks.

Relevance: 80.00%

Abstract:

A typical robot vision scenario might involve a vehicle moving with an unknown 3D motion (translation and rotation) while taking intensity images of an arbitrary environment. This paper describes the theory and implementation issues of tracking any desired point in the environment. The method is performed entirely in software, with no need to mechanically move the camera relative to the vehicle. The tracking technique is simple and inexpensive. Furthermore, it uses neither optical flow nor feature correspondence; instead, the spatio-temporal gradients of the input intensity images are used directly. The experimental results presented support the idea of tracking in software. The final result is a sequence of tracked images in which the desired point is kept stationary, independent of the nature of the relative motion. Finally, the quality of these tracked images is examined using spatio-temporal gradient maps.

Relevance: 80.00%

Abstract:

This thesis describes an investigation of retinal directional selectivity. We show intracellular (whole-cell patch) recordings in turtle retina which indicate that this computation occurs prior to the ganglion cell, and we describe a pre-ganglionic circuit model to account for this and other findings which places the non-linear spatio-temporal filter at individual, oriented amacrine cell dendrites. The key non-linearity is provided by interactions between excitatory and inhibitory synaptic inputs onto the dendrites, and their distal tips provide directionally selective excitatory outputs onto ganglion cells. Detailed simulations of putative cells support this model, given reasonable parameter constraints. The performance of the model also suggests that this computational substructure may be relevant within the dendritic trees of CNS neurons in general.
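The excitatory/inhibitory interaction can be caricatured in a toy model (an illustration of the general veto idea only, not the thesis's biophysical simulations; all numbers are made up): excitation and a spatially offset, slowly decaying shunting inhibition along a 1-D dendrite respond far more strongly to motion in one direction than the other.

```python
# Toy direction-selective dendrite: excitatory and inhibitory synapses are
# spatially offset; shunting inhibition vetoes excitation when the stimulus
# activates the inhibitory site before the excitatory one (null direction).

def dendrite_response(direction):
    """Total excitation surviving shunting inhibition for a stimulus
    sweeping across n synaptic sites in the given direction (+1 or -1)."""
    n = 10
    offset = 2          # inhibition is activated 'offset' sites ahead (+1 side)
    tau = 3             # time steps an activated inhibitory synapse stays on
    sites = range(n)
    order = list(sites) if direction > 0 else list(reversed(sites))
    inhibition_off_at = {}    # site -> time at which its inhibition decays
    output = 0.0
    for t, pos in enumerate(order):
        # stimulus at 'pos' activates inhibition at pos + offset
        i_site = pos + offset
        if 0 <= i_site < n:
            inhibition_off_at[i_site] = t + tau
        # excitation at 'pos' is shunted if local inhibition is still active
        if inhibition_off_at.get(pos, -1) > t:
            continue            # vetoed
        output += 1.0           # unit EPSP reaches the dendritic tip
    return output

preferred = dendrite_response(-1)   # inhibition trails: excitation survives
null = dendrite_response(+1)        # inhibition leads: excitation vetoed
print(preferred, null)
```

Because the inhibitory field sits on one side of each excitatory input, the same synaptic hardware gives a large response in one sweep direction and a small one in the other, which is the asymmetry the model places at individual amacrine cell dendrites.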

Relevance: 80.00%

Abstract:

This thesis explores how to represent image texture in order to obtain information about the geometry and structure of surfaces, with particular emphasis on locating surface discontinuities. Theoretical and psychophysical results lead to the following conclusions for the representation of image texture: (1) A texture edge primitive is needed to identify texture change contours, which are formed by an abrupt change in the 2-D organization of similar items in an image. The texture edge can be used for locating discontinuities in surface structure and surface geometry and for establishing motion correspondence. (2) Abrupt changes in attributes that vary with changing surface geometry (orientation, density, length, and width) could be used to identify discontinuities in surface geometry and surface structure. (3) Texture tokens are needed to separate the effects of different physical processes operating on a surface. They represent the local structure of the image texture. Their spatial variation can be used in the detection of texture discontinuities and texture gradients, and their temporal variation may be used for establishing motion correspondence. What precisely constitutes the texture tokens is unknown; it appears, however, that the intensity changes alone will not suffice, but local groupings of them may. (4) The above primitives need to be assigned rapidly over a large range in an image.

Relevance: 80.00%

Abstract:

Faculty of Mathematics and Computer Science: Department of Computational Linguistics and Artificial Intelligence

Relevance: 80.00%

Abstract:

We consider a mobile sensor network monitoring a spatio-temporal field. Given limited cache sizes at the sensor nodes, the goal is to develop a distributed cache management algorithm to efficiently answer queries with a known probability distribution over the spatial dimension. First, we propose a novel distributed information theoretic approach in which the nodes locally update their caches based on full knowledge of the space-time distribution of the monitored phenomenon. At each time instant, local decisions are made at the mobile nodes concerning which samples to keep and whether or not a new sample should be acquired at the current location. These decisions account for minimizing an entropic utility function that captures the average amount of uncertainty in queries given the probability distribution of query locations. Second, we propose a different correlation-based technique, which only requires knowledge of the second-order statistics, thus relaxing the stringent constraint of having a priori knowledge of the query distribution, while significantly reducing the computational overhead. It is shown that the proposed approaches considerably reduce the average field estimation error by maintaining efficient cache content. It is further shown that the correlation-based technique is robust to model mismatch in case of imperfect knowledge of the underlying generative correlation structure.
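The correlation-based variant can be sketched as a greedy subset selection (an illustrative reconstruction, not the paper's algorithm; the exponential correlation model, the nearest-sample estimator, and all numbers are assumptions): keep the cached samples that most reduce the expected estimation error under the query-location distribution.

```python
import math

# Exponential correlation model over 1-D locations (an assumed model for
# the sketch): rho(a, b) = exp(-|a - b| / L).
def rho(a, b, L=2.0):
    return math.exp(-abs(a - b) / L)

def expected_error(cache, queries):
    """Expected residual variance when each query is answered from the
    best-correlated cached sample (unit-variance field)."""
    err = 0.0
    for q, p in queries:
        best = max((rho(q, s) for s in cache), default=0.0)
        err += p * (1.0 - best * best)
    return err

def greedy_cache(candidates, queries, cache_size):
    """Greedily pick the subset of candidate sample locations that most
    reduces the expected estimation error over the query distribution."""
    cache = []
    pool = list(candidates)
    while pool and len(cache) < cache_size:
        best = min(pool, key=lambda s: expected_error(cache + [s], queries))
        cache.append(best)
        pool.remove(best)
    return cache

# Query locations concentrated near x = 3 and x = 8.
queries = [(3.0, 0.5), (8.0, 0.4), (12.0, 0.1)]
candidates = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
print(greedy_cache(candidates, queries, cache_size=2))
```

The cache gravitates toward where queries are likely, which is the sense in which the nodes' view of the field "reflects the characteristics of queries over that field".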

Relevance: 80.00%

Abstract:

Commonly, research work in routing for delay tolerant networks (DTN) assumes that node encounters are predestined, in the sense that they are the result of unknown, exogenous processes that control the mobility of these nodes. In this paper, we argue that for many applications such an assumption is too restrictive: while the spatio-temporal coordinates of the start and end points of a node's journey are determined by exogenous processes, the specific path that a node may take in space-time, and hence the set of nodes it may encounter, could be controlled in such a way so as to improve the performance of DTN routing. To that end, we consider a setting in which each mobile node is governed by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged for DTN message delivery purposes. We define the Mobility Coordination Problem (MCP) for DTNs as follows: Given a set of nodes, each with its own schedule, and a set of messages to be exchanged between these nodes, devise a set of node encounters that minimizes message delivery delays while satisfying all node schedules. The MCP for DTNs is general enough that it allows us to model and evaluate some of the existing DTN schemes, including data mules and message ferries. In this paper, we show that MCP for DTNs is NP-hard and propose two detour-based approaches to solve the problem. The first (DMD) is a centralized heuristic that leverages knowledge of the message workload to suggest specific detours to optimize message delivery. The second (DNE) is a distributed heuristic that is oblivious to the message workload, and which selects detours so as to maximize node encounters. We evaluate the performance of these detour-based approaches using extensive simulations based on synthetic workloads as well as real schedules obtained from taxi logs in a major metropolitan area.
Our evaluation shows that our centralized, workload-aware DMD approach yields the best performance, in terms of message delay and delivery success ratio, and that our distributed, workload-oblivious DNE approach yields favorable performance when compared to approaches that require the use of data mules and message ferries.
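The schedule-slack idea can be made concrete with a small feasibility check (purely illustrative; the paper's DMD/DNE heuristics are more involved): a detour through a candidate meeting point is admissible only if the lengthened path still lets the node reach its next scheduled location on time.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def detour_feasible(leg_start, leg_end, depart_time, arrive_deadline,
                    meeting_point, speed=1.0):
    """A node travelling leg_start -> leg_end may visit meeting_point iff
    the lengthened path still meets the next scheduled arrival time."""
    detour_len = dist(leg_start, meeting_point) + dist(meeting_point, leg_end)
    return depart_time + detour_len / speed <= arrive_deadline

# Direct leg (0,0) -> (10,0) takes 10 time units; the schedule allows 12,
# i.e., 2 units of slack.
leg = ((0.0, 0.0), (10.0, 0.0))
print(detour_feasible(*leg, 0.0, 12.0, (5.0, 3.0)))  # slack absorbs the detour
print(detour_feasible(*leg, 0.0, 12.0, (5.0, 8.0)))  # too far off-path
```

A detour heuristic along the lines of DMD or DNE would enumerate candidate encounters and keep only those passing this check for every node involved.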

Relevance: 80.00%

Abstract:

The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owner of these devices. These challenges preclude using centralized control and preclude considering services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs), and Message Delivery Applications (MDAs). In the context of FMA, this thesis presents two techniques that are well-suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second approach relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally, and thus reduce communication overheads. Then, this thesis recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. 
The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) could be used effectively for local resource management. Second, judicious leverage and coordination of node mobility could lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.

Relevance: 80.00%

Abstract:

Spotting patterns of interest in an input signal is a very useful task in many different fields including medicine, bioinformatics, economics, speech recognition and computer vision. Example instances of this problem include spotting an object of interest in an image (e.g., a tumor), a pattern of interest in a time-varying signal (e.g., audio analysis), or an object of interest moving in a specific way (e.g., a human's body gesture). Traditional spotting methods, which are based on Dynamic Time Warping or hidden Markov models, use some variant of dynamic programming to register the pattern and the input while accounting for temporal variation between them. At the same time, those methods often suffer from several shortcomings: they may give meaningless solutions when input observations are unreliable or ambiguous, they require a high complexity search across the whole input signal, and they may give incorrect solutions if some patterns appear as smaller parts within other patterns. In this thesis, we develop a framework that addresses these three problems, and evaluate the framework's performance in spotting and recognizing hand gestures in video. The first contribution is a spatiotemporal matching algorithm that extends the dynamic programming formulation to accommodate multiple candidate hand detections in every video frame. The algorithm finds the best alignment between the gesture model and the input, and simultaneously locates the best candidate hand detection in every frame. This allows for a gesture to be recognized even when the hand location is highly ambiguous. The second contribution is a pruning method that uses model-specific classifiers to reject dynamic programming hypotheses with a poor match between the input and model. Pruning improves the efficiency of the spatiotemporal matching algorithm, and in some cases may improve the recognition accuracy. 
The pruning classifiers are learned from training data, and cross-validation is used to reduce the chance of overpruning. The third contribution is a subgesture reasoning process that models the fact that some gesture models can falsely match parts of other, longer gestures. By integrating subgesture reasoning, the spotting algorithm can avoid the premature detection of a subgesture when the longer gesture is actually being performed. Subgesture relations between pairs of gestures are automatically learned from training data. The performance of the approach is evaluated on two challenging video datasets: hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and American Sign Language (ASL) utterances gestured by ASL native signers. The experiments demonstrate that the proposed method is more accurate and efficient than competing approaches. The proposed approach can be generally applied to alignment or search problems with multiple input observations that use dynamic programming to find a solution.
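The multiple-candidate idea can be sketched as a small extension of standard DTW (a simplification of the thesis's algorithm; scalar features stand in for hand-appearance scores, and the function name is invented): the DP state is (model position, frame, candidate), so the best candidate in each frame is chosen jointly with the alignment.

```python
INF = float("inf")

def multicandidate_dtw(model, frames):
    """DTW between a model sequence and an input whose every frame holds
    several candidate observations; returns (cost, chosen candidate per frame).
    model: list of feature values; frames: list of candidate lists."""
    m, n = len(model), len(frames)
    # D[i][j][k]: best cost matching model[:i+1] to frames[:j+1] while
    # candidate k is the one chosen in frame j.
    D = [[[INF] * len(frames[j]) for j in range(n)] for _ in range(m)]
    back = {}
    for i in range(m):
        for j in range(n):
            for k, obs in enumerate(frames[j]):
                local = abs(model[i] - obs)
                if i == 0 and j == 0:
                    D[i][j][k] = local
                    continue
                best_prev, best_state = INF, None
                # predecessors: diagonal (i-1,j-1,*), horizontal (i,j-1,*),
                # vertical (i-1,j,k) -- vertical keeps the same candidate.
                if i > 0 and j > 0:
                    for kp, c in enumerate(D[i - 1][j - 1]):
                        if c < best_prev:
                            best_prev, best_state = c, (i - 1, j - 1, kp)
                if j > 0:
                    for kp, c in enumerate(D[i][j - 1]):
                        if c < best_prev:
                            best_prev, best_state = c, (i, j - 1, kp)
                if i > 0 and D[i - 1][j][k] < best_prev:
                    best_prev, best_state = D[i - 1][j][k], (i - 1, j, k)
                if best_prev < INF:
                    D[i][j][k] = local + best_prev
                    back[(i, j, k)] = best_state
    # best terminal candidate, then backtrace the chosen candidates
    k_end = min(range(len(frames[-1])), key=lambda k: D[m - 1][n - 1][k])
    cost, state = D[m - 1][n - 1][k_end], (m - 1, n - 1, k_end)
    chosen = {}
    while state is not None:
        i, j, k = state
        chosen[j] = k
        state = back.get(state)
    return cost, [chosen[j] for j in sorted(chosen)]

model = [1.0, 2.0, 3.0]
# Each frame offers two candidate detections; only one fits the model.
frames = [[1.0, 9.0], [9.0, 2.0], [3.0, 9.0]]
print(multicandidate_dtw(model, frames))
```

Even though the correct detection sits at a different index in every frame, the joint alignment picks it out, which is the sense in which a gesture can be recognized when the hand location is highly ambiguous.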

Relevance: 80.00%

Abstract:

The proposed model, called the combinatorial and competitive spatio-temporal memory or CCSTM, provides an elegant solution to the general problem of having to store and recall spatio-temporal patterns in which states or sequences of states can recur in various contexts. For example, fig. 1 shows two state sequences that have a common subsequence, C and D. The CCSTM assumes that any state has a distributed representation as a collection of features. Each feature has an associated competitive module (CM) containing K cells. On any given occurrence of a particular feature, A, exactly one of the cells in CM_A will be chosen to represent it. It is the particular set of cells active on the previous time step that determines which cells are chosen to represent instances of their associated features on the current time step. If we assume that typically S features are active in any state then any state has K^S different neural representations. This huge space of possible neural representations of any state is what underlies the model's ability to store and recall numerous context-sensitive state sequences. The purpose of this paper is simply to describe this mechanism.
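The core bookkeeping is easy to sketch (an illustrative reconstruction, not the paper's learning rule; the hash-based winner selection is an assumption standing in for learned competition): one winner per competitive module, selected as a function of the previously active cell set, so the same feature set receives different cell-level codes in different contexts.

```python
import hashlib

K = 4  # cells per competitive module (CM)

def winner(feature, prev_active):
    """Deterministically pick one of the K cells in the feature's CM as a
    function of the previous time step's active cell set (the context)."""
    key = feature + "|" + ",".join(sorted(prev_active))
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return f"{feature}#{h % K}"

def encode(state_features, prev_active):
    """Neural representation of a state: one winning cell per active feature."""
    return {winner(f, prev_active) for f in state_features}

# The shared subsequence {C, D} receives a cell-level code that depends on
# whether it was preceded by state A or by state B.
code_after_A = encode({"C", "D"}, prev_active=encode({"A"}, set()))
code_after_B = encode({"C", "D"}, prev_active=encode({"B"}, set()))
print(sorted(code_after_A), sorted(code_after_B))
print(K ** 2)   # K^S possible codes for an S = 2-feature state
```

With S active features the state has K^S possible cell-level codes, which is the combinatorial capacity the abstract appeals to for storing context-sensitive sequences.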

Relevance: 80.00%

Abstract:

A new neural network architecture is introduced for the recognition of pattern classes after supervised and unsupervised learning. Applications include spatio-temporal image understanding and prediction and 3-D object recognition from a series of ambiguous 2-D views. The architecture, called ART-EMAP, achieves a synthesis of adaptive resonance theory (ART) and spatial and temporal evidence integration for dynamic predictive mapping (EMAP). ART-EMAP extends the capabilities of fuzzy ARTMAP in four incremental stages. Stage 1 introduces distributed pattern representation at a view category field. Stage 2 adds a decision criterion to the mapping between view and object categories, delaying identification of ambiguous objects when faced with a low confidence prediction. Stage 3 augments the system with a field where evidence accumulates in medium-term memory (MTM). Stage 4 adds an unsupervised learning process to fine-tune performance after the limited initial period of supervised network training. Each ART-EMAP stage is illustrated with a benchmark simulation example, using both noisy and noise-free data. A concluding set of simulations demonstrates ART-EMAP performance on a difficult 3-D object recognition problem.
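Stage 3's medium-term memory can be illustrated with a generic evidence accumulator (not the ART-EMAP equations; the scores, threshold, and function name are made up for the sketch): predictions from single ambiguous views are deferred until accumulated evidence separates one object category from the rest.

```python
def accumulate_predict(view_scores, threshold=0.4):
    """Accumulate per-category evidence over successive 2-D views and
    answer only once the leader beats the runner-up by 'threshold'."""
    mtm = {}                              # medium-term evidence store
    for t, scores in enumerate(view_scores):
        for cat, s in scores.items():
            mtm[cat] = mtm.get(cat, 0.0) + s
        ranked = sorted(mtm.items(), key=lambda kv: -kv[1])
        if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= threshold:
            return ranked[0][0], t        # confident: category and view index
    return None, len(view_scores) - 1     # never confident enough

# Each single view is ambiguous between 'cube' and 'pyramid'; evidence
# accumulated across views eventually favours 'cube'.
views = [
    {"cube": 0.40, "pyramid": 0.35},
    {"cube": 0.45, "pyramid": 0.30},
    {"cube": 0.50, "pyramid": 0.25},
]
print(accumulate_predict(views))
```

No individual view clears the decision criterion, yet the accumulated margin does by the third view, mirroring how Stage 2's criterion delays identification and Stage 3's MTM resolves it.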

Relevance: 80.00%

Abstract:

Four pigs, three with focal infarctions in the apical intraventricular septum (IVS) and/or left ventricular free wall (LVFW), were imaged with an intracardiac echocardiography (ICE) transducer. Custom beam sequences were used to excite the myocardium with focused acoustic radiation force (ARF) impulses and image the subsequent tissue response. Tissue displacement in response to the ARF excitation was calculated with a phase-based estimator, and transverse wave magnitude and velocity were each estimated at every depth. The excitation sequence was repeated rapidly, either in the same location to generate 40 Hz M-modes at a single steering angle, or with a modulated steering angle to synthesize 2-D displacement magnitude and shear wave velocity images at 17 points in the cardiac cycle. Both types of images were acquired from various views in the right and left ventricles, in and out of infarcted regions. In all animals, acoustic radiation force impulse (ARFI) and shear wave elasticity imaging (SWEI) estimates indicated diastolic relaxation and systolic contraction in noninfarcted tissues. The M-mode sequences showed high beat-to-beat spatio-temporal repeatability of the measurements for each imaging plane. In views of noninfarcted tissue in the diseased animals, no significant elastic remodeling was indicated when compared with the control. Where available, views of infarcted tissue were compared with similar views from the control animal. In views of the LVFW, the infarcted tissue presented as stiff and non-contractile compared with the control. In a view of the IVS, no significant difference was seen between infarcted and healthy tissue, whereas in another view, a heterogeneous infarction was seen to be presenting itself as non-contractile in systole.

Relevance: 80.00%

Abstract:

PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D+dual energy+time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling.
The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. CONCLUSIONS: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
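The low-rank and sparse split at the heart of the reconstruction can be illustrated on a plain matrix (a generic robust-PCA-style alternating proximal scheme, not the authors' 5D split Bregman solver; the regularization weights and toy data are assumptions):

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_plus_sparse(M, lam=0.25, iters=200):
    """Alternately fit L (low-rank background, cf. the time/energy-averaged
    image) and S (sparse contrast component) so that M ~ L + S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau=1.0)
        S = shrink(M - L, tau=lam)
    return L, S

rng = np.random.default_rng(0)
# Rank-1 'background' shared by all frames plus a few sparse 'contrast' spots.
background = np.outer(rng.normal(size=20), rng.normal(size=20))
sparse = np.zeros((20, 20))
sparse[3, 7], sparse[12, 5] = 5.0, -4.0
L, S = low_rank_plus_sparse(background + sparse)
# The final shrinkage step bounds each residual entry near lam.
print(bool(np.abs((background + sparse) - (L + S)).max() < 0.3))
```

In the 5D setting the "background" plays the role of the well-sampled time and energy averaged image, while the sparse component plays the role of the undersampled spectral/temporal contrast; sharing projections between the two subproblems is what enables the reported undersampling.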

Relevance: 80.00%

Abstract:

The introduction of non-indigenous marine plankton species can have a considerable ecological and economic effect on regional systems. Their presence, however, can go unnoticed until they reach nuisance status and as a consequence few case histories exist containing information on their initial appearance and their spatio-temporal patterns. Here we report on the occurrence of the non-indigenous diatom Coscinodiscus wailesii in 1977 in the English Channel, its subsequent geographical spread into European shelf seas, and its persistence as a significant member of the diatom community in the north-east Atlantic from 1977-1995.

Relevance: 80.00%

Abstract:

Processes of enrichment, concentration and retention are thought to be important for the successful recruitment of small pelagic fish in upwelling areas, but are difficult to measure. In this study, a novel approach is used to examine the role of spatio-temporal oceanographic variability on recruitment success of the Northern Benguela sardine Sardinops sagax. This approach applies a neural network pattern recognition technique, called a self-organising map (SOM), to a seven-year time series of satellite-derived sea level data. The Northern Benguela is characterised by quasi-perennial upwelling of cold, nutrient-rich water and is influenced by intrusions of warm, nutrient-poor Angola Current water from the north. In this paper, these processes are categorised in terms of their influence on recruitment success through the key ocean triad mechanisms of enrichment, concentration and retention. Moderate upwelling is seen as favourable for recruitment, whereas strong upwelling, weak upwelling and Angola Current intrusion appear detrimental to recruitment success. The SOM was used to identify characteristic patterns from sea level difference data and these were interpreted with the aid of sea surface temperature data. We found that the major oceanographic processes of upwelling and Angola Current intrusion dominated these patterns, allowing them to be partitioned into those representing recruitment favourable conditions and those representing adverse conditions for recruitment. A marginally significant relationship was found between the index of sardine recruitment and the frequency of recruitment favourable conditions (r^2 = 0.61, p = 0.068, n = 6). Because larvae are vulnerable to environmental influences for a period of at least 50 days after spawning, the SOM was then used to identify windows of persistent favourable conditions lasting longer than 50 days, termed recruitment favourable periods (RFPs).
The occurrence of RFPs was compared with back-calculated spawning dates for each cohort. Finally, a comparison of RFPs with the time of spawning and the index of recruitment showed that in years where there were 50 or more days of favourable conditions following spawning, good recruitment followed (Mann-Whitney U-test: p = 0.064, n = 6). These results show the value of the SOM technique for describing spatio-temporal variability in oceanographic processes. Variability in these processes appears to be an important factor influencing recruitment in the Northern Benguela sardine, although the available data time series is currently too short to be conclusive. Nonetheless, the analysis of satellite data, using a neural network pattern-recognition approach, provides a useful framework for investigating fisheries recruitment problems.
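The SOM itself is compact enough to sketch (a generic 1-D SOM on toy 2-D data, not the paper's sea level analysis; all parameters and the two "oceanographic states" are illustrative):

```python
import math, random

def train_som(data, n_units=4, epochs=200, lr0=0.5, sigma0=1.5, seed=1):
    """1-D self-organising map: units compete for each sample; the winner
    and its lattice neighbours move toward the sample, so nearby units
    come to encode similar characteristic patterns."""
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)               # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.1   # shrinking neighbourhood
        for x in data:
            # best-matching unit for this sample
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2
                                        for u, v in zip(units[i], x)))
            for i in range(n_units):
                # Gaussian neighbourhood on the 1-D map lattice
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                units[i] = [u + lr * h * (v - u)
                            for u, v in zip(units[i], x)]
    return units

# Two toy 'oceanographic states': clusters near (0, 0) and (5, 5).
random.seed(0)
data = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(30)] + \
       [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(30)]
units = train_som(data, n_units=4)
print([[round(c, 1) for c in u] for u in units])
```

After training, each unit is a characteristic pattern and each new field (here, a 2-D point; in the paper, a sea level difference map) is classified by its best-matching unit, which is how the patterns were partitioned into recruitment favourable and adverse conditions.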