828 results for Blind equalisers
Abstract:
Blind deconvolution consists of estimating a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally, the L0 “norm” is the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well the prior suits the blind deconvolution problem. Our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple maximum a posteriori (MAP) formulation is enough to achieve state-of-the-art results. To minimize this formulation we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via the primal-dual approach and one via majorization-minimization.
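A minimal sketch of the kind of objective described above, assuming a quadratic data term, a logarithmic prior on gradient magnitudes, and a quadratic kernel penalty; the weights lam, gamma, eps and the reweighted least-squares form of the majorization step are illustrative choices, not the paper's implementation:

    # Sketch only: E(u, k) = ||k*u - f||^2 + lam * sum log(eps + |grad u|) + gamma * ||k||^2
    import numpy as np
    from scipy.signal import fftconvolve

    def grad(u):
        # Forward differences with a replicated border.
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        return gx, gy

    def energy(u, k, f, lam=2e-3, gamma=1e-2, eps=1e-3):
        gx, gy = grad(u)
        data = np.sum((fftconvolve(u, k, mode="same") - f) ** 2)
        prior = lam * np.sum(np.log(eps + np.sqrt(gx ** 2 + gy ** 2)))
        return data + prior + gamma * np.sum(k ** 2)

    def mm_weights(u, eps=1e-3):
        # Majorization-minimization step: the concave log term is bounded from
        # above by a quadratic at the current gradients, so the next image
        # update is a least-squares problem with per-pixel weights
        # w = 1 / (eps + |grad u|).
        gx, gy = grad(u)
        return 1.0 / (eps + np.sqrt(gx ** 2 + gy ** 2))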
Abstract:
In this paper we propose a solution to blind deconvolution of a scene with two layers (foreground/background). We show that reconstructing the support of these two layers from a single image of a conventional camera is not possible. As a solution we propose to use a light field camera. We demonstrate that a single light field image captured with a Lytro camera can be successfully deblurred. More specifically, we consider the case of space-varying motion blur, where the blur magnitude depends on the depth changes in the scene. Our method employs a layered model that handles occlusions and partial transparencies due to both motion blur and the out-of-focus blur of the plenoptic camera. We reconstruct each layer's support, the corresponding sharp textures, and the motion blurs via an optimization scheme. The performance of our algorithm is demonstrated on synthetic as well as real light field images.
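As a rough illustration of the layered formation model the abstract refers to, the sketch below composites a blurred foreground over a blurred background, using the blurred support mask as a partial-transparency alpha; all variable names and the compositing details are assumptions, not the paper's model:

    import numpy as np
    from scipy.signal import fftconvolve

    def synthesize(fg_tex, fg_mask, k_fg, bg_tex, k_bg):
        # Blur each layer with its own motion kernel.
        blurred_fg = fftconvolve(fg_tex * fg_mask, k_fg, mode="same")
        blurred_mask = fftconvolve(fg_mask.astype(float), k_fg, mode="same")
        blurred_bg = fftconvolve(bg_tex, k_bg, mode="same")
        # Motion blur makes the foreground boundary partially transparent, so
        # the background shows through wherever the blurred mask is below 1.
        return blurred_fg + (1.0 - blurred_mask) * blurred_bg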
Abstract:
The magnetoencephalogram (MEG) is contaminated with undesired signals, which are called artifacts. Some of the most important ones are the cardiac and ocular artifacts (CA and OA, respectively), and the power line noise (PLN). Blind source separation (BSS) has been used to reduce the influence of these artifacts in the data. There is a plethora of BSS-based artifact removal approaches, but few comparative analyses. In this study, MEG background activity from 26 subjects was processed with five widespread BSS techniques (AMUSE, SOBI, JADE, extended Infomax, and FastICA) and one constrained BSS (cBSS) technique. Then, the ability of several combinations of BSS algorithm, epoch length, and artifact detection metric to automatically reduce the CA, OA, and PLN was quantified with objective criteria. The results pinpointed cBSS as a very suitable approach to remove the CA. Additionally, a combination of AMUSE or SOBI and artifact detection metrics based on entropy or power criteria decreased the OA. Finally, the PLN was reduced by means of a spectral metric. These findings confirm the utility of BSS in artifact removal for MEG background activity.
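The generic BSS artifact-removal pipeline being compared (decompose, score components with a detection metric, zero the artifactual ones, back-project) can be sketched as follows; scikit-learn's FastICA and a kurtosis threshold stand in here for the several algorithms and metrics evaluated in the study:

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def remove_artifacts(meg, threshold=5.0):
        # meg: array of shape (n_samples, n_channels).
        ica = FastICA(n_components=meg.shape[1], max_iter=1000)
        sources = ica.fit_transform(meg)       # estimated component time courses
        scores = kurtosis(sources, axis=0)     # peaky components often capture cardiac artifacts
        keep = np.abs(scores) < threshold      # detection metric and threshold are assumptions
        sources[:, ~keep] = 0.0                # suppress artifactual components
        return ica.inverse_transform(sources)  # back-project to cleaned channel data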
Abstract:
Information reconciliation is a crucial procedure in the classical post-processing of quantum key distribution (QKD). Poor reconciliation efficiency, revealing more information than strictly needed, may compromise the maximum attainable distance, while poor performance of the algorithm limits the practical throughput in a QKD device. Historically, reconciliation has mainly been done using heavily interactive procedures with close to minimal information disclosure, like Cascade, or using less efficient but also less interactive "just one message is exchanged" procedures, like those based on low-density parity-check (LDPC) codes. The price to pay in the LDPC case is that good efficiency is only attained for very long codes and in a very narrow range centered around the quantum bit error rate (QBER) that the code was designed to reconcile, thus forcing the use of several codes if a broad range of QBER needs to be catered for. Real-world implementations of these methods are thus very demanding, either on computational or communication resources or both, to the extent that the latest generation of GHz-clocked QKD systems is finding a bottleneck in the classical part. In order to produce compact, high-performance and reliable QKD systems it would be highly desirable to remove these problems. Here we analyse the use of short-length LDPC codes in the information reconciliation context using a low-interactivity, blind protocol that avoids an a priori error rate estimation. We demonstrate that LDPC codes of length 2×10³ bits are suitable for blind reconciliation. Such codes are of high interest in practice, since they can be used for hardware implementations with very high throughput.
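A hedged sketch of syndrome-based LDPC reconciliation with the blind, rate-adaptive behaviour described above: Alice discloses the syndrome of her key, and if decoding fails, previously punctured symbols are revealed in batches instead of estimating the QBER beforehand. The parity-check matrix H, the batch size, and the placeholder ldpc_decode function are assumptions, not the protocol's actual parameters:

    import numpy as np

    def syndrome(H, x):
        # H: (m, n) binary parity-check matrix, x: length-n key as 0/1 ints.
        return (H @ x) % 2

    def blind_reconcile(H, alice_key, bob_key, punctured_idx, ldpc_decode, max_rounds=4):
        s_alice = syndrome(H, alice_key)       # the only disclosure in the first round
        revealed = {}
        for _ in range(max_rounds):
            ok, estimate = ldpc_decode(H, bob_key, s_alice, revealed)
            if ok:
                return estimate                # Bob's corrected key
            # On failure, disclose a further batch of punctured positions and retry.
            batch, punctured_idx = punctured_idx[:32], punctured_idx[32:]
            revealed.update({i: int(alice_key[i]) for i in batch})
        return None                            # reconciliation failed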
Abstract:
We demonstrate a simple self-referenced single-shot method for simultaneously measuring two different arbitrary pulses, which can potentially be complex and also have very different wavelengths. The method is a variation of cross-correlation frequency-resolved optical gating (XFROG) that we call double-blind (DB) FROG. It involves measuring two spectrograms, both of which are obtained simultaneously in a single apparatus. DB FROG retrieves both pulses robustly by using the standard XFROG algorithm, implemented alternately on each of the traces, taking one pulse to be "known" and solving for the other. We show both numerically and experimentally that DB FROG using a polarization-gating beam geometry works reliably and appears to have no nontrivial ambiguities.
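The alternation at the core of DB FROG, as described, can be sketched as follows; xfrog_retrieve is a placeholder for any standard XFROG solver, and the initialization and iteration count are arbitrary:

    import numpy as np

    def db_frog(trace_1, trace_2, xfrog_retrieve, n_iter=50, n_t=256):
        # Random complex fields as initial guesses for the two unknown pulses.
        pulse_a = np.random.randn(n_t) + 1j * np.random.randn(n_t)
        pulse_b = np.random.randn(n_t) + 1j * np.random.randn(n_t)
        for _ in range(n_iter):
            # Solve trace 1 for pulse A with pulse B held fixed as the "known"
            # gate, then swap roles on trace 2.
            pulse_a = xfrog_retrieve(trace_1, gate=pulse_b)
            pulse_b = xfrog_retrieve(trace_2, gate=pulse_a)
        return pulse_a, pulse_b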
Abstract:
In this paper we propose a novel fast random search clustering (RSC) algorithm for mixing matrix identification in multiple input multiple output (MIMO) linear blind inverse problems with sparse inputs. The proposed approach is based on the clustering of the observations around the directions given by the columns of the mixing matrix that typically occurs for sparse inputs. Exploiting this fact, the RSC algorithm proceeds by parameterizing the mixing matrix using hyperspherical coordinates, randomly selecting candidate basis vectors (i.e. clustering directions) from the observations, and accepting or rejecting them according to a binary hypothesis test based on the Neyman–Pearson criterion. The RSC algorithm is not tailored to any specific distribution for the sources, can deal with an arbitrary number of inputs and outputs (thus solving the difficult under-determined problem), and is applicable to both instantaneous and convolutive mixtures. Extensive simulations for synthetic and real data with different numbers of inputs and outputs, data sizes, sparsity factors of the inputs, and signal-to-noise ratios confirm the good performance of the proposed approach under moderate-to-high signal-to-noise ratios. ABSTRACT: A blind source separation method for sparse signals based on identifying the mixing matrix via random clustering techniques.
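An illustrative sketch of the random-search clustering idea, assuming candidate directions are drawn from the normalized observations and accepted when enough observations cluster around them; the fixed cosine threshold and minimum count stand in for the paper's Neyman–Pearson test and are not its actual decision rule:

    import numpy as np

    def rsc_mixing_columns(X, n_columns, cos_thresh=0.95, min_count=30,
                           max_tries=10000, rng=None):
        # X: observations of shape (n_outputs, n_samples).
        rng = np.random.default_rng(rng)
        U = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)  # project onto the unit sphere
        columns = []
        for _ in range(max_tries):
            if len(columns) == n_columns:
                break
            cand = U[:, rng.integers(U.shape[1])]     # candidate direction drawn from the data
            aligned = np.abs(cand @ U) > cos_thresh   # sign-invariant cluster membership
            novel = all(abs(cand @ c) < cos_thresh for c in columns)
            if aligned.sum() >= min_count and novel:  # crude stand-in for the hypothesis test
                columns.append(cand)
        return np.stack(columns, axis=1)              # estimate up to scale and permutation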
Abstract:
Acknowledgments We thank the members of the Trial Steering and Data Monitoring Committee and all the people who helped in the conduct of the study (including the OPPTIMUM collaborative group and other clinicians listed in the appendix). We are grateful to Paul Piette (Besins Healthcare Corporate, Brussels, Belgium) and Besins Healthcare for their kind donation of active and placebo drug for use in the study, and to staff of the pharmacy and research and development departments of the participating hospitals. We are also grateful to the many people who helped in this study but who we have been unable to name, and in particular all the women (and their babies) who participated in OPPTIMUM. OPPTIMUM was funded by the Efficacy and Mechanism Evaluation (EME) Programme, a Medical Research Council (MRC) and National Institute for Health Research (NIHR) partnership, award number G0700452, revised to 09/800/27. The EME Programme is funded by the MRC and NIHR, with contributions from the Chief Scientist Office in Scotland and National Institute for Social Care and Research in Wales. The views expressed in this publication are those of the author(s) and not necessarily those of the MRC, National Health Service, NIHR, or the Department of Health. The funder had no involvement in data collection, analysis, or interpretation, and no role in the writing of this manuscript or the decision to submit for publication.
Abstract:
Averaged event-related potential (ERP) data recorded from the human scalp reveal electroencephalographic (EEG) activity that is reliably time-locked and phase-locked to experimental events. We report here the application of a method based on information theory that decomposes one or more ERPs recorded at multiple scalp sensors into a sum of components with fixed scalp distributions and sparsely activated, maximally independent time courses. Independent component analysis (ICA) decomposes ERP data into a number of components equal to the number of sensors. The derived components have distinct but not necessarily orthogonal scalp projections. Unlike dipole-fitting methods, the algorithm does not model the locations of their generators in the head. Unlike methods that remove second-order correlations, such as principal component analysis (PCA), ICA also minimizes higher-order dependencies. Applied to detected—and undetected—target ERPs from an auditory vigilance experiment, the algorithm derived ten components that decomposed each of the major response peaks into one or more ICA components with relatively simple scalp distributions. Three of these components were active only when the subject detected the targets, three other components only when the target went undetected, and one in both cases. Three additional components accounted for the steady-state brain response to a 39-Hz background click train. Major features of the decomposition proved robust across sessions and changes in sensor number and placement. This method of ERP analysis can be used to compare responses from multiple stimuli, task conditions, and subject states.
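A hedged illustration of the decomposition described, with multi-channel ERP data factored into fixed scalp maps and maximally independent time courses; scikit-learn's FastICA is used here as a stand-in for the information-theoretic (infomax) algorithm the study applies:

    import numpy as np
    from sklearn.decomposition import FastICA

    def decompose_erp(erp):
        # erp: array of shape (n_channels, n_timepoints); as stated above, ICA
        # returns as many components as sensors.
        ica = FastICA(n_components=erp.shape[0], max_iter=2000)
        time_courses = ica.fit_transform(erp.T)  # (n_timepoints, n_components)
        scalp_maps = ica.mixing_                 # fixed scalp topographies, one column per component
        return scalp_maps, time_courses.T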
Abstract:
We have studied patient PB, who, after an electric shock that led to vascular insufficiency, became virtually blind, although he retained a capacity to see colors consciously. For our psychophysical studies, we used a simplified version of the Land experiments [Land, E. (1974) Proc. R. Inst. G. B. 47, 23–58] to learn whether color constancy mechanisms are intact in him, which amounts to learning whether he can assign a constant color to a surface in spite of changes in the precise wavelength composition of the light reflected from that surface. We supplemented our psychophysical studies with imaging ones, using functional magnetic resonance, to learn something about the location of areas that are active in his brain when he perceives colors. The psychophysical results suggested that color constancy mechanisms are severely defective in PB and that his color vision is wavelength-based. The imaging results showed that, when he viewed and recognized colors, significant increases in activity were restricted mainly to V1-V2. We conclude that a partly defective color system operating on its own in a severely damaged brain is able to mediate a conscious experience of color in the virtually total absence of other visual abilities.