950 results for multi-channel processing
Abstract:
OBJECTIVES To establish whether complex signal processing is beneficial for users of bone-anchored hearing aids. METHODS Review and analysis of two studies from our own group, each comparing a speech processor with basic digital signal processing (either Baha Divino or Baha Intenso) and a processor with complex digital signal processing (either Baha BP100 or Baha BP110 power). The main differences between basic and complex signal processing are the number of audiologist-accessible frequency channels and the availability and complexity of the directional multi-microphone noise reduction and loudness compression systems. RESULTS Both studies show a small, statistically non-significant improvement in speech understanding in quiet with complex digital signal processing. The average improvement for speech in noise is +0.9 dB when speech and noise are both emitted from the front of the listener. When noise is emitted from the rear and speech from the front, the advantage of the devices with complex digital signal processing over those with basic signal processing increases, on average, to +3.2 dB (range +2.3 to +5.1 dB, p ≤ 0.0032). DISCUSSION Complex digital signal processing does indeed improve speech understanding, especially in noise coming from the rear. This finding is supported by another study, published recently by a different research group. CONCLUSIONS Compared to basic digital signal processing, complex digital signal processing can increase speech understanding in users of bone-anchored hearing aids. The benefit is largest for speech understanding in noise.
Abstract:
We study the sensitivity of large-scale xenon detectors to low-energy solar neutrinos, to coherent neutrino-nucleus scattering and to neutrinoless double beta decay. As a concrete example, we consider the xenon part of the proposed DARWIN (Dark Matter WIMP Search with Noble Liquids) experiment. We perform detailed Monte Carlo simulations of the expected backgrounds, considering realistic energy resolutions and thresholds in the detector. In a low-energy window of 2–30 keV, where the sensitivity to solar pp and ⁷Be neutrinos is highest, an integrated pp-neutrino rate of 5900 events can be reached in a fiducial mass of 14 tons of natural xenon, after 5 years of data. The pp-neutrino flux could thus be measured with a statistical uncertainty around 1%, reaching the precision of solar model predictions. These low-energy solar neutrinos will be the limiting background to the dark matter search channel for WIMP-nucleon cross sections below ~2×10⁻⁴⁸ cm² and WIMP masses around 50 GeV/c², for an assumed 99.5% rejection of electronic recoils due to elastic neutrino-electron scatters. Nuclear recoils from coherent scattering of solar neutrinos will limit the sensitivity to WIMP masses below ~6 GeV/c² to cross sections above ~4×10⁻⁴⁵ cm². DARWIN could reach a competitive half-life sensitivity of 5.6×10²⁶ y to the neutrinoless double beta decay of ¹³⁶Xe after 5 years of data, using 6 tons of natural xenon in the central detector region.
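The quoted ~1% statistical precision follows directly from Poisson counting statistics on the integrated event count reported in the abstract; a minimal check:

```python
import math

# Poisson counting: relative uncertainty = sqrt(N) / N = 1 / sqrt(N)
n_pp = 5900  # integrated pp-neutrino events (14 t fiducial, 5 yr), from the abstract
rel_unc = 1.0 / math.sqrt(n_pp)
print(f"relative statistical uncertainty: {100 * rel_unc:.1f}%")  # about 1.3%
```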
Abstract:
The nematode Caenorhabditis elegans is a well-known model organism used to investigate fundamental questions in biology. Motility assays of this small roundworm are designed to study the relationships between genes and behavior. Commonly, motility analysis is used to classify nematode movements and characterize them quantitatively. Over the past few years, C. elegans motility has been studied across a wide range of environments, including crawling on substrates, swimming in fluids, and locomoting through microfluidic substrates. However, each environment often requires customized image processing tools relying on heuristic parameter tuning. In the present study, we propose a novel Multi-Environment Model Estimation (MEME) framework for automated image segmentation that is versatile across various environments. The MEME platform is constructed around the concept of Mixture of Gaussians (MOG) models, where statistical models for both the background environment and the nematode appearance are explicitly learned and used to accurately segment a target nematode. Our method is designed to simplify the burden often imposed on users; here, only a single image that includes a nematode in its environment must be provided for model learning. In addition, our platform enables the extraction of nematode ‘skeletons’ for straightforward motility quantification. We test our algorithm on various locomotive environments and compare performances with an intensity-based thresholding method. Overall, MEME outperforms the threshold-based approach for the overwhelming majority of cases examined. Ultimately, MEME provides researchers with an attractive platform for C. elegans segmentation and ‘skeletonizing’ across a wide range of motility assays.
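As a rough, hypothetical illustration of the MOG idea (not the authors' MEME implementation), a two-component 1-D Gaussian mixture fitted by EM to pixel intensities can already separate a bright nematode from a darker background:

```python
import numpy as np

def fit_two_gaussians(x, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to intensities x via EM."""
    mu = np.array([x.min(), x.max()])            # crude but stable initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-sample responsibility of each component
        p = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

def segment_pixels(img, pi, mu, var):
    """Label each pixel with its most likely mixture component."""
    x = img.ravel()[:, None]
    p = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x - mu) ** 2 / (2 * var))
    return p.argmax(axis=1).reshape(img.shape)
```

With the min/max initialization, component 1 converges to the brighter mode, so a bright worm on a dark background maps to label 1; the published framework learns far richer appearance models than this sketch.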
Abstract:
PURPOSE To investigate the feasibility of MR diffusion tensor imaging (DTI) of the median nerve using simultaneous multi-slice echo planar imaging (EPI) with blipped CAIPIRINHA. MATERIALS AND METHODS After federal ethics board approval, MR imaging of the median nerves of eight healthy volunteers (mean age, 29.4 years; range, 25–32) was performed at 3 T using a 16-channel hand/wrist coil. An EPI sequence (b-value, 1,000 s/mm²; 20 gradient directions) was acquired without acceleration as well as with twofold and threefold slice acceleration. Fractional anisotropy (FA), mean diffusivity (MD) and quality of nerve tractography (number of tracks, average track length, track homogeneity, anatomical accuracy) were compared between the acquisitions using multivariate ANOVA and the Kruskal-Wallis test. RESULTS Acquisition time was 6:08 min for standard DTI, 3:38 min for twofold and 2:31 min for threefold acceleration. No differences were found regarding FA (standard DTI: 0.620 ± 0.058; twofold acceleration: 0.642 ± 0.058; threefold acceleration: 0.644 ± 0.061; p ≥ 0.217) and MD (standard DTI: 1.076 ± 0.080 mm²/s; twofold acceleration: 1.016 ± 0.123 mm²/s; threefold acceleration: 0.979 ± 0.153 mm²/s; p ≥ 0.074). Twofold acceleration yielded similar tractography quality compared to standard DTI (p > 0.05). With threefold acceleration, however, average track length and track homogeneity decreased (p = 0.004–0.021). CONCLUSION Accelerated DTI of the median nerve is feasible. Twofold acceleration yields similar results to standard DTI. KEY POINTS • Standard DTI of the median nerve is limited by its long acquisition time. • Simultaneous multi-slice acquisition is a new technique for accelerated DTI. • Accelerated DTI of the median nerve yields similar results to standard DTI.
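For reference, the FA and MD values compared in this study are simple functions of the diffusion tensor's eigenvalues; a sketch of the standard definitions:

```python
import numpy as np

def fa_md(eigenvalues):
    """Fractional anisotropy and mean diffusivity from the three
    eigenvalues of a diffusion tensor (standard definitions)."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                   # mean diffusivity
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return fa, md

print(fa_md([1.0, 1.0, 1.0]))  # isotropic diffusion: FA = 0.0, MD = 1.0
```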
Abstract:
A multi-proxy chronological framework along with sequence-stratigraphic interpretations unveils composite Milankovitch cyclicity in the sedimentary records of the Last Glacial–Interglacial cycle at NE Gela Basin on the Sicilian continental margin. Chronostratigraphic data (including foraminifera-based eco-biostratigraphy and δ18O records, tephrochronological markers and ¹⁴C AMS radiometric dates) were derived from the shallow-shelf drill sites GeoB14403 (54.6 m recovery) and GeoB14414 (27.5 m), collected with both gravity and drilled MeBo cores in 193 m and 146 m water depth, respectively. The recovered intervals record Marine Isotope Stages and Substages (MIS) from MIS 5 to MIS 1, thus comprising major stratigraphic parts of the progradational deposits that form the last 100-ka depositional sequence. Calibration of shelf sedimentary units with borehole stratigraphies indicates the impact of higher-frequency (20-ka) sea-level cycles punctuating this 100-ka cycle. This becomes most evident in the alternation of thick interstadial highstand (HST) wedges and thinner glacial forced-regression (FSST) units mirroring seaward shifts in coastal progradation. Despite their relatively short-lived depositional phase, these subordinate HST units form the bulk of the 100-ka depositional sequence. Two mechanisms are proposed that likely account for enhanced sediment accumulation rates (SAR) of up to 200 cm/ka during these intervals: (1) intensified activity of deep and intermediate Levantine Intermediate Water (LIW) associated with the drowning of Mediterranean shelves, and (2) amplified sediment flux along the flooded shelf in response to hyperpycnal plumes generated by extreme precipitation events during overall arid conditions.
Equally, the latter mechanism is thought to be at the origin of undulated features resolved in the acoustic records of MIS 5 Interstadials, which bear a striking resemblance to modern equivalents forming on late-Holocene prodeltas of other Mediterranean shallow-shelf settings.
Abstract:
The spatial and temporal dynamics of seagrasses have been studied from the leaf to patch (100 m²) scales. However, landscape-scale (> 100 km²) seagrass population dynamics are unresolved in seagrass ecology. Previous remote sensing approaches have lacked the temporal or spatial resolution, or ecologically appropriate mapping, to fully address this issue. This paper presents a robust, semi-automated object-based image analysis approach for mapping dominant seagrass species, percentage cover and above-ground biomass using a time series of field data and coincident high spatial resolution satellite imagery. The study area was a 142 km² shallow, clear-water seagrass habitat (the Eastern Banks, Moreton Bay, Australia). Nine data sets acquired between 2004 and 2013 were used to create seagrass species and percentage cover maps through the integration of seagrass photo transect field data, and atmospherically and geometrically corrected high spatial resolution satellite image data (WorldView-2, IKONOS and QuickBird-2) using an object-based image analysis approach. Biomass maps were derived using empirical models trained with in-situ above-ground biomass data per seagrass species. Maps and summary plots identified inter- and intra-annual variation of seagrass species composition, percentage cover level and above-ground biomass. The methods provide a rigorous approach for field and image data collection and pre-processing, a semi-automated approach to extract seagrass species and cover maps and assess accuracy, and the subsequent empirical modelling of seagrass biomass. The resultant maps provide a fundamental data set for understanding landscape-scale seagrass dynamics in a shallow water environment. Our findings provide proof of concept for the use of time-series analysis of remotely sensed seagrass products for use in seagrass ecology and management.
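The per-species empirical biomass models are described only at a high level; one common form is a simple regression from mapped percentage cover to field-measured biomass. A sketch with made-up calibration numbers (all values hypothetical, not from the study):

```python
import numpy as np

# hypothetical calibration samples for one species:
# mapped percent cover vs. field-measured above-ground biomass (g DW m^-2)
cover = np.array([5.0, 20.0, 40.0, 60.0, 80.0, 95.0])
biomass = np.array([4.0, 15.0, 31.0, 47.0, 63.0, 75.0])

slope, intercept = np.polyfit(cover, biomass, 1)   # least-squares line

def predict_biomass(cover_map):
    """Apply the fitted empirical model to a per-pixel cover map."""
    return slope * np.asarray(cover_map, dtype=float) + intercept

print(predict_biomass([10, 50, 90]))
```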
Abstract:
The time difference of arrival (TDOA) between multiple microphones has been used since 2006 as a source of information (localization) to complement the spectral features for speaker diarization. In this paper, we propose a new localization feature, the intensity channel contribution (ICC), based on the relative energy of the signal arriving at each channel compared to the sum of the energy of all the channels. We demonstrate that by combining the ICC features and the TDOA features, the robustness of the localization features is improved and the diarization error rate (DER) of the complete system (using localization and spectral features) is reduced. Using this new localization feature, we achieve a 5.2% relative DER improvement on our development data, a 3.6% relative DER improvement on the RT07 evaluation data and a 7.9% relative DER improvement on the most recent RT09 evaluation data.
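The ICC feature as described is simply each channel's energy normalized by the total energy across channels; a minimal sketch (the sample frames are hypothetical):

```python
import numpy as np

def icc_features(frames):
    """frames: (n_channels, n_samples) time-aligned audio segment.
    Returns each channel's share of the total energy (sums to 1)."""
    energy = (np.asarray(frames, dtype=float) ** 2).sum(axis=1)
    return energy / energy.sum()

# a speaker close to channel 0 yields a larger ICC value on that channel
frames = np.array([[0.8, -0.9, 0.7],
                   [0.1, -0.2, 0.1]])
print(icc_features(frames))
```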
Abstract:
In this paper we present an adaptive multi-camera system for real-time object detection that can efficiently adjust the computational requirements of the video processing blocks to the available processing power and the activity of the scene. The system is based on a two-level adaptation strategy that works at the local and at the global level. Object detection is based on a Gaussian mixture model background subtraction algorithm. Results show that the system can efficiently adapt the algorithm parameters without a significant loss in detection accuracy.
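As a simplified, hypothetical sketch of the underlying technique (a single Gaussian per pixel rather than the full mixture, with the learning rate standing in for the kind of parameter such a system could adapt):

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running Gaussian background model (simplified
    single-mode variant of GMM background subtraction)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0)
        self.alpha = alpha   # learning rate (adaptation speed)
        self.k = k           # detection threshold in standard deviations
    def apply(self, frame):
        diff = frame.astype(float) - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # update the model only where the pixel looks like background
        a = np.where(foreground, 0.0, self.alpha)
        self.mean += a * diff
        self.var = (1 - a) * self.var + a * diff ** 2
        return foreground
```

Raising `alpha` makes the model follow scene changes faster at the cost of absorbing slow-moving objects; tuning such parameters per camera is the kind of trade-off the adaptation strategy manages.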
Abstract:
For the detection and monitoring of dynamic objects in quasi-static scenes, background subtraction techniques in which the background is modeled at the pixel level are extensively used, despite their significant limitations. In this work we propose a novel approach to background modeling that operates at the region level in a wavelet-based multi-resolution framework. Based on a segmentation of the background, each region is characterized independently as a mixture of K Gaussian modes, considering the model of the approximation and detail coefficients at the different wavelet decomposition levels. The background region characterization is updated over time, and elements of interest are detected by computing the distance between the background region models and those of each incoming image in the sequence. The inclusion of context in the modeling scheme through each region's characterization makes the model robust, able to handle not only gradual illumination and long-term changes, but also sudden illumination changes and the presence of strong shadows in the scene.
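A one-level Haar decomposition, the simplest instance of the wavelet framework described, can be written directly in NumPy; per-region statistics would then be computed on these subbands (a sketch, not the authors' code):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation (LL) plus
    horizontal (LH), vertical (HL) and diagonal (HH) detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4
    LH = (a + b - c - d) / 4
    HL = (a - b + c - d) / 4
    HH = (a - b - c + d) / 4
    return LL, LH, HL, HH

def region_model(subband, mask):
    """Mean/variance characterization of one region in one subband."""
    vals = subband[mask]
    return vals.mean(), vals.var()
```

Comparing these per-region means and variances between the background model and an incoming frame gives the distance-based detection the abstract describes.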
Abstract:
We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and its texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether; they are generated automatically by an unwrapping system agreed upon by both encoder and decoder.
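The refinement-image idea can be illustrated with a toy two-level pyramid: the encoder stores the residual between a texture LOD and the upsampled coarser LOD, and the decoder adds it back only when the finer level is actually needed (a sketch assuming simple 2× averaging/replication for down/upsampling, not the paper's codec):

```python
import numpy as np

def downsample(tex):   # 2x2 block average
    h, w = tex.shape
    return tex.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(tex):     # nearest-neighbour 2x replication
    return tex.repeat(2, axis=0).repeat(2, axis=1)

fine = np.arange(16, dtype=float).reshape(4, 4)   # finest texture LOD
coarse = downsample(fine)                          # transmitted first
refinement = fine - upsample(coarse)               # sent only if needed

# decoder reconstructs the finer LOD exactly from coarse + refinement
reconstructed = upsample(coarse) + refinement
print(np.allclose(reconstructed, fine))  # True
```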
Abstract:
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost compared to the task creation and communication costs and other practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present in a tutorial way a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
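The granularity-control problem on the distributing side can be caricatured in a few lines: group consecutive tasks until their accumulated cost amortizes the fixed creation/communication overhead (all numbers hypothetical; the real Ciao machinery relies on compile-time cost analysis, not a greedy list scan):

```python
def group_tasks(tasks, overhead):
    """tasks: list of (name, estimated_cost) pairs; group consecutive
    tasks until each group's cost justifies the fixed per-task overhead."""
    groups, current, cost = [], [], 0.0
    for name, c in tasks:
        current.append(name)
        cost += c
        if cost >= overhead:
            groups.append(current)
            current, cost = [], 0.0
    if current:  # fold leftover small tasks into the last group
        if groups:
            groups[-1].extend(current)
        else:
            groups.append(current)
    return groups

print(group_tasks([("a", 1), ("b", 1), ("c", 5), ("d", 1)], overhead=3))
```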
Abstract:
These slides present several 3-D reconstruction methods to obtain the geometric structure of a scene that is viewed by multiple cameras. We focus on the combination of the geometric modeling in the image formation process with the use of standard optimization tools to estimate the characteristic parameters that describe the geometry of the 3-D scene. In particular, linear, non-linear and robust methods to estimate the monocular and epipolar geometry are introduced as cornerstones to generate 3-D reconstructions with multiple cameras. Some examples of systems that use this constructive strategy are Bundler, PhotoSynth, VideoSurfing, etc., which are able to obtain 3-D reconstructions with several hundreds or thousands of cameras.
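The linear epipolar-geometry estimation mentioned here is classically done with the eight-point algorithm; a minimal unnormalized sketch for well-conditioned correspondences (Hartley normalization, used in practice, is omitted for brevity):

```python
import numpy as np

def fundamental_8pt(x1, x2):
    """Estimate the fundamental matrix F from N >= 8 point
    correspondences (x1, x2: (N, 2) arrays) so that x2^T F x1 = 0."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)        # F spans the null space of A
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)        # enforce the rank-2 constraint
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

Each correspondence then satisfies the epipolar constraint x2ᵀ F x1 ≈ 0, which is the cornerstone the slides build multi-camera reconstruction on.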
Abstract:
One of the main challenges facing next generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.
Abstract:
This paper addresses the question of maximizing classifier accuracy when classifying task-related mental activity from magnetoencephalography (MEG) data. We propose the use of different sources of information and introduce an automatic channel selection procedure. To determine an informative set of channels, our approach combines a variety of machine learning algorithms: feature subset selection methods, classifiers based on regularized logistic regression, information fusion, and multiobjective optimization based on probabilistic modeling of the search space. The experimental results show that our proposal improves classification accuracy compared to approaches whose classifiers use only one type of MEG information or for which the set of channels is fixed a priori.
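A toy version of the channel-ranking step (a univariate filter score, not the authors' multiobjective pipeline; the data below are synthetic):

```python
import numpy as np

def rank_channels(X, y):
    """X: (trials, channels) feature matrix, y: binary labels.
    Rank channels by a Fisher-style class-separability score."""
    a, b = X[y == 0], X[y == 1]
    score = np.abs(a.mean(0) - b.mean(0)) / np.sqrt(a.var(0) + b.var(0) + 1e-12)
    return np.argsort(score)[::-1]      # most informative channel first

# synthetic data: only channel 2 carries class information
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
X[y == 1, 2] += 3.0
print(rank_channels(X, y)[0])  # channel 2 ranks first
```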