43 results for Compressed Sensing, Analog-to-Information Conversion, Signal Processing


Relevance: 100.00%

Abstract:

Classical computer vision methods can only weakly emulate the multi-level parallelism in signal processing and information sharing that takes place across different parts of the primate visual system and enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system compared with what machine vision systems have achieved to date motivates scientists and researchers to explore this area further, in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for a hierarchical, efficient object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes the visual data non-linearly, focusing only on regions of interest and hence reducing the time needed to achieve real-time performance. Further, it is suggested to modify the visual cortex model for recognising objects by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
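As a minimal illustration of the attention idea just described, the sketch below selects a few salient regions of interest so that later recognition stages can skip the rest of the scene. It assumes a grey-scale image in a NumPy array; the difference-of-Gaussians saliency measure, the fixed window grid, and all parameter values are illustrative stand-ins, not the model proposed in the paper.

```python
import numpy as np
from scipy import ndimage

def attention_rois(image, win=32, top_k=3):
    """Return the origins of the top_k most salient win x win windows.

    Saliency is a crude centre-surround contrast (difference of
    Gaussians) standing in for the attention model; only the selected
    windows would be passed on to the recognition hierarchy.
    """
    image = np.asarray(image, dtype=float)
    saliency = np.abs(ndimage.gaussian_filter(image, 2.0)
                      - ndimage.gaussian_filter(image, 8.0))
    h, w = image.shape
    scored = [(saliency[y:y + win, x:x + win].mean(), (y, x))
              for y in range(0, h - win + 1, win)
              for x in range(0, w - win + 1, win)]
    scored.sort(key=lambda s: s[0], reverse=True)   # most salient first
    return [pos for _, pos in scored[:top_k]]
```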

Relevance: 100.00%

Abstract:

The mechanisms of long-term adaptation to a low-oxygen environment are quite well studied, but little is known about the sensing of oxygen shortage, its signal transduction and the short-term effects of hypoxia in plant cells. We have found that the RNA helicase eIF4A-III, a putative component of the Exon Junction Complex, rapidly changes its pattern of localisation in the plant nucleus under hypoxic conditions. Under normal growth conditions GFP-eIF4A-III was mainly nucleoplasmic, but under hypoxia it moved to the nucleolus and splicing speckles. This transition occurred within 15-20 min in Arabidopsis culture cells and seedling root cells, but took more than 2 h in tobacco BY-2 culture cells. Inhibition of respiration, transcription or phosphorylation in cells, as well as ethanol treatment, had effects similar to hypoxia. The most likely consequence is that a certain mRNA population remains bound to eIF4A-III and other mRNA-processing proteins, rather than being transported from the nucleus to the cytoplasm, so that its translation is suspended.

Relevance: 100.00%

Abstract:

The study of the morphodynamics of tidal channel networks is important because of their role in tidal propagation and the evolution of salt-marshes and tidal flats. Channel dimensions range from tens of metres wide and metres deep near the low-water mark to only 20-30 cm wide and 20 cm deep for the smallest channels on the marshes. The conventional method of measuring the networks is cumbersome, involving manual digitising of aerial photographs. This paper describes a semi-automatic, knowledge-based network extraction method that is being implemented to work with airborne scanning laser altimetry (and later aerial photography). The channels exhibit a width variation of several orders of magnitude, making an approach based on multi-scale line detection difficult. The processing therefore uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels using a distance-with-destination transform. Breaks in the networks are repaired by extending channel ends along their existing direction to join with nearby channels, using the domain knowledge that flow paths should proceed downhill and that any network fragment should be joined to a nearby fragment so as to connect eventually to the open sea.
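A minimal sketch of the multi-scale edge-detection stage described above, assuming a gridded elevation surface (such as one derived from the laser altimetry) held in a NumPy array; the scale set, the Gaussian-derivative detector, and the relative threshold are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def multiscale_edges(dem, sigmas=(1, 2, 4, 8), rel_threshold=0.5):
    """Detect channel edges at several scales and merge the responses.

    dem           -- 2-D array of surface heights (illustrative input)
    sigmas        -- Gaussian scales, spanning narrow to wide channels
    rel_threshold -- fraction of each scale's max gradient kept as 'edge'
    """
    dem = np.asarray(dem, dtype=float)
    edges = np.zeros(dem.shape, dtype=bool)
    for sigma in sigmas:
        # Gradient magnitude of the Gaussian-smoothed surface: strong
        # responses mark the banks (edges) of channels at this scale.
        grad = ndimage.gaussian_gradient_magnitude(dem, sigma=sigma)
        edges |= grad > rel_threshold * grad.max()
    return edges
```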

Relevance: 100.00%

Abstract:

Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the system identification and control theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistently exciting the medium within which it propagates, such an approach is fully justifiable. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed; these are necessary because, owing to dispersion, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
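As a small illustration of expressing a medium's optical properties as a linear combination of simple response terms, the sketch below builds a composite susceptibility from one Lorentzian oscillator and one Debye relaxation in the THz band. All parameter values are invented for illustration, and this is the textbook frequency-domain form rather than the state-space formulation used in the paper.

```python
import numpy as np

def lorentz_chi(omega, s, omega0, gamma):
    """Single Lorentzian oscillator contribution to the susceptibility."""
    return s * omega0**2 / (omega0**2 - omega**2 - 1j * gamma * omega)

def debye_chi(omega, delta_eps, tau):
    """Single Debye relaxation contribution."""
    return delta_eps / (1.0 - 1j * omega * tau)

# Illustrative composite response: the medium is modelled as a linear
# combination of simple oscillator/relaxation terms (parameters invented).
omega = 2 * np.pi * np.linspace(0.1e12, 10e12, 2000)   # rad/s, THz band
chi = (lorentz_chi(omega, s=0.8, omega0=2 * np.pi * 3e12,
                   gamma=2 * np.pi * 0.2e12)
       + debye_chi(omega, delta_eps=0.3, tau=1e-12))
eps = 1.0 + chi           # relative permittivity
n_complex = np.sqrt(eps)  # complex refractive index seen by the THz pulse
```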

Relevance: 100.00%

Abstract:

The effect of increased dietary intakes of alpha-linolenic acid (ALNA) or eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) for 2 months upon plasma lipid composition and the capacity for conversion of ALNA to longer-chain metabolites was investigated in healthy men (52 (SD 12) years). After a 4-week baseline period during which the subjects substituted a control spread, a test meal containing [U-13C]ALNA (700 mg) was consumed to measure conversion to EPA, docosapentaenoic acid (DPA) and DHA over 48 h. Subjects were then randomised to one of three groups for 8 weeks before repeating the tracer study: (1) continued on the same intake (control, n 5); (2) increased ALNA intake (10 g/d, n 4); (3) increased EPA+DHA intake (1.5 g/d, n 5). At baseline, the apparent fractional conversion of labelled ALNA was: EPA 2.80 %, DPA 1.20 % and DHA 0.04 %. After 8 weeks on the control diet, plasma lipid composition and [13C]ALNA conversion remained unchanged compared with baseline. The high-ALNA diet raised plasma triacylglycerol-EPA and -DPA concentrations and the phosphatidylcholine-EPA concentration, whilst [13C]ALNA conversion was similar to baseline. The high-(EPA+DHA) diet raised plasma phosphatidylcholine-EPA and -DHA concentrations and decreased [13C]ALNA conversion to EPA (2-fold) and DPA (4-fold), whilst [13C]ALNA conversion to DHA was unchanged. The dietary interventions did not alter the partitioning of ALNA towards beta-oxidation. The present results indicate that ALNA conversion was down-regulated by increased product (EPA+DHA) availability, but was not up-regulated by increased substrate (ALNA) consumption. This suggests that regulation of ALNA conversion may limit the influence of variations in dietary n-3 fatty acid intake on plasma lipid composition.

Relevance: 100.00%

Abstract:

Four groups of second language (L2) learners of English from different language backgrounds (Chinese, Japanese, German, and Greek) and a group of native-speaker controls participated in an online reading-time experiment with sentences involving long-distance wh-dependencies. Although the native speakers showed evidence of making use of intermediate syntactic gaps during processing, the L2 learners appeared to associate the fronted wh-phrase directly with its lexical subcategorizer, regardless of whether the subjacency constraint was operative in their native language. This finding is argued to support the hypothesis that non-native comprehenders underuse syntactic information in L2 processing.

Relevance: 100.00%

Abstract:

The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tools available in modern interpretation software packages often employ artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably the data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited, in conjunction with other geological information, in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
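As a small, hedged illustration of how a fuzzy inference system might fuse the confidences of the attribute-based and raw-amplitude ANNs, the sketch below applies two Mamdani-style rules with min/max operators. The membership functions, rules, and defuzzification are invented for illustration and are not the FIS described in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fused_pick_score(conf_attr, conf_raw):
    """Fuse the two ANN confidences (each in [0, 1]) into one pick score.

    Illustrative Mamdani-style rules:
      R1: IF attr-confidence is HIGH AND raw-confidence is HIGH -> STRONG pick
      R2: IF attr-confidence is LOW  OR  raw-confidence is LOW  -> WEAK pick
    """
    high_a = tri(conf_attr, 0.4, 1.0, 1.6)   # 'HIGH' membership
    high_r = tri(conf_raw, 0.4, 1.0, 1.6)
    low_a = tri(conf_attr, -0.6, 0.0, 0.6)   # 'LOW' membership
    low_r = tri(conf_raw, -0.6, 0.0, 0.6)
    strong = min(high_a, high_r)             # rule R1 firing strength
    weak = max(low_a, low_r)                 # rule R2 firing strength
    # Weighted-average defuzzification towards STRONG=1.0 and WEAK=0.0.
    return strong / (strong + weak + 1e-9)
```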

Relevance: 100.00%

Abstract:

In this work the G_A^0 distribution is assumed as the universal model for amplitude Synthetic Aperture Radar (SAR) image data under the multiplicative model. The observed data are therefore assumed to obey a G_A^0(alpha, gamma, n) law, where the parameter n is related to the speckle noise and (alpha, gamma) are related to the ground truth, giving information about the background. Maps generated by estimating (alpha, gamma) at each coordinate can therefore be used as input for classification methods. Maximum likelihood estimators are derived and used to form estimated parameter maps. This estimation can be hampered by the presence of corner reflectors, the man-made objects used to calibrate SAR images, which produce large return values. To alleviate this contamination, robust (M) estimators are also derived for the universal model. Gaussian maximum likelihood classification is used to obtain maps from challenging simulated data, and the superiority of robust estimation is quantitatively assessed.
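The paper derives M-estimators for the G_A^0 model itself; as a stand-in, the sketch below illustrates the M-estimation principle on a simple location parameter using a Huber rho-function, showing the mechanism by which extreme returns (such as corner reflectors) are down-weighted rather than allowed to dominate the estimate.

```python
import numpy as np

def huber_m_location(x, k=1.345, tol=1e-6, max_iter=100):
    """Huber M-estimate of location by iteratively reweighted averaging.

    Samples with large residuals (e.g. corner-reflector returns) receive
    weight k/|r| < 1 instead of weight 1, so they cannot dominate the
    estimate -- the robustness property exploited in the paper.
    """
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                            # robust starting point
    scale = np.median(np.abs(x - mu)) / 0.6745   # MAD scale estimate
    if scale == 0:
        scale = 1.0
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= k, 1.0,
                     k / np.maximum(np.abs(r), 1e-12))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```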

Relevance: 100.00%

Abstract:

This paper presents a parallelised Two-Pass Hexagonal (TPA) algorithm, constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS), for motion estimation. In the TPA, motion vectors (MVs) generated by the first-pass LHMEA are used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of macroblocks (MBs). We introduce hashtables into video processing and present a complete parallel implementation. We propose and evaluate parallel implementations of the LHMEA stage of the TPA on clusters of workstations for real-time video compression, and discuss how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing on performance is discussed. The performance of the algorithm is evaluated using standard video sequences, and the results are compared with current algorithms.
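A minimal sketch of the second-pass HEXBS stage, seeded with a predictor motion vector such as the one the first-pass LHMEA would supply; the 16 x 16 block size, the SAD cost, and the bounds handling are conventional illustrative choices, and the hashtable first pass and the parallelisation are omitted.

```python
import numpy as np

LARGE_HEX = [(2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]
SMALL_HEX = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def sad(cur, ref, x, y, dx, dy, n=16):
    """Sum of absolute differences between the n x n block of the current
    frame at (x, y) and the reference block displaced by (dx, dy)."""
    h, w = ref.shape
    if not (0 <= y + dy and y + dy + n <= h and
            0 <= x + dx and x + dx + n <= w):
        return np.inf
    return np.abs(cur[y:y+n, x:x+n].astype(int)
                  - ref[y+dy:y+dy+n, x+dx:x+dx+n].astype(int)).sum()

def hexbs(cur, ref, x, y, pred=(0, 0), n=16):
    """Hexagon-based search seeded with a predictor motion vector."""
    best = pred
    best_cost = sad(cur, ref, x, y, *best, n)
    while True:  # large-hexagon stage: move while the centre improves
        cands = [(best[0] + dx, best[1] + dy) for dx, dy in LARGE_HEX]
        costs = [sad(cur, ref, x, y, mx, my, n) for mx, my in cands]
        i = int(np.argmin(costs))
        if costs[i] >= best_cost:
            break
        best, best_cost = cands[i], costs[i]
    for dx, dy in SMALL_HEX:  # small-hexagon refinement around the centre
        c = sad(cur, ref, x, y, best[0] + dx, best[1] + dy, n)
        if c < best_cost:
            best, best_cost = (best[0] + dx, best[1] + dy), c
    return best
```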

Relevance: 100.00%

Abstract:

A basic principle in data modelling is to incorporate available a priori information regarding the underlying data-generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate two types of prior knowledge: that the underlying data-generating mechanism exhibits a known symmetry property, and that the underlying process obeys a set of given boundary value constraints. The class of orthogonal least squares regression algorithms can readily be applied to construct parsimonious grey-box RBF models with enhanced generalisation capability.
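A minimal sketch of how a known symmetry can be built directly into an RBF model, assuming Gaussian units: each basis function pairs a centre with its mirror image, so the model is exactly even or odd by construction. Plain least squares stands in for the paper's orthogonal least squares subset selection, and all names and parameter values here are illustrative.

```python
import numpy as np

def symmetric_rbf_design(X, centres, width, parity=+1):
    """Design matrix of symmetry-respecting Gaussian RBF units.

    Each unit pairs a centre c with its mirror image -c, so every basis
    function (and hence the fitted model) is exactly even (parity=+1)
    or odd (parity=-1), encoding the known symmetry as prior knowledge.
    """
    def gauss(X, c):
        return np.exp(-np.sum((X - c) ** 2, axis=1) / width)
    return np.column_stack([gauss(X, c) + parity * gauss(X, -c)
                            for c in centres])

# Illustrative use: fit an odd target with the symmetry built in.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)    # odd target
centres = X[rng.choice(len(X), 20, replace=False)]
Phi = symmetric_rbf_design(X, centres, width=1.0, parity=-1)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # LS in place of OLS selection
```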

Relevance: 100.00%

Abstract:

This paper introduces a procedure for filtering electromyographic (EMG) signals. Its key element is the Empirical Mode Decomposition, a novel digital signal processing technique that can decompose any time-series into a set of functions designated intrinsic mode functions. The procedure for EMG signal filtering is compared with a related approach based on the wavelet transform. Results obtained from the analysis of synthetic and experimental EMG signals show that our method can be successfully and easily applied in practice to attenuate background activity in EMG signals. (c) 2006 Elsevier Ltd. All rights reserved.
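A minimal sketch of EMD-based filtering, assuming the third-party PyEMD package (distributed as EMD-signal) provides the decomposition; discarding the first IMF as background is an illustrative choice, not the selection criterion used in the paper.

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD (EMD-signal) package is installed

def emd_filter(emg, drop=1):
    """Attenuate background activity in an EMG trace via EMD.

    The signal is decomposed into intrinsic mode functions (IMFs) and
    partially reconstructed without the first `drop` IMFs. Which IMFs
    to discard is an illustrative choice, not the paper's criterion.
    """
    imfs = EMD().emd(np.asarray(emg, dtype=float))  # rows: fine-to-coarse IMFs
    return imfs[drop:].sum(axis=0)                  # partial reconstruction
```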

Relevance: 100.00%

Abstract:

Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.

Relevance: 100.00%

Abstract:

Tremor is a clinical feature characterized by oscillations of a part of the body. The detection and study of tremor is an important step in investigations seeking to explain the underlying control strategies of the central nervous system under natural (or physiological) and pathological conditions. It is well established that tremorous activity is composed of deterministic and stochastic components. For this reason, the use of digital signal processing (DSP) techniques that take into account the nonlinearity and nonstationarity of such signals may bring into the analysis new information which is often obscured by traditional linear techniques (e.g. Fourier analysis). In this context, this paper introduces the application of the empirical mode decomposition (EMD) and the Hilbert spectrum (HS), two relatively new DSP techniques for the analysis of nonlinear and nonstationary time-series, to the study of tremor. Our results, obtained from the analysis of experimental signals collected from 31 patients with different neurological conditions, showed that the EMD could automatically decompose acquired signals into basic components, called intrinsic mode functions (IMFs), representing tremorous and voluntary activity. The identification of a physical meaning for IMFs in the context of tremor analysis suggests an alternative and new way of detecting tremorous activity, which may be relevant for applications requiring automatic detection of tremor. Furthermore, the energy of the IMFs was visualized as a function of time and frequency by means of the HS. This analysis showed that the variation of energy of tremorous and voluntary activity can be distinguished and characterized on the HS, which may be relevant for applications aiming to identify neurological disorders. In general, both the HS and the EMD proved very useful for objective analysis of any kind of tremor, and can therefore potentially be used for functional assessment.
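A minimal sketch of the Hilbert-spectrum step, assuming the IMFs from an EMD stage are stacked row-wise in a NumPy array sampled at fs Hz; binning the resulting amplitude and frequency tracks into a time-frequency-energy plot is omitted.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imfs, fs):
    """Instantaneous amplitude and frequency of each IMF.

    Applying the Hilbert transform to the IMFs produced by EMD yields
    the time-frequency-energy description (the Hilbert spectrum) used
    to separate tremorous from voluntary activity.
    """
    analytic = hilbert(imfs, axis=-1)
    amp = np.abs(analytic)                          # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic), axis=-1)
    freq = np.diff(phase, axis=-1) * fs / (2 * np.pi)  # inst. frequency, Hz
    return amp, freq
```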

Relevance: 100.00%

Abstract:

This paper specifically examines the implantation of a microelectrode array into the median nerve of the left arm of a healthy male volunteer. The objective was to establish a bi-directional link between the human nervous system and a computer via a unique interface module. This is the first time that such a device has been used with a healthy human. The aim of the study was to assess the efficacy, compatibility and long-term operability of the neural implant in allowing the subject to perceive feedback stimulation, and in allowing neural activity to be detected and processed so that the subject could interact with remote technologies. A case study demonstrating real-time control of an instrumented prosthetic hand by means of the bi-directional link is given. The implantation did not result in infection, and scanning electron microscope images of the implant post-extraction have not indicated significant rejection of the implant by the body. No perceivable loss of hand sensation or motion control was experienced by the subject while the implant was in place, and further testing of the subject following the removal of the implant has not indicated any measurable long-term defects. The implant was extracted after 96 days. Copyright © 2004 John Wiley & Sons, Ltd.