830 results for multiresolution filtering
Abstract:
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low-frequency (250 Hz) and high-frequency (4000 Hz) streams of tones presented at the same time (dual-stream) and were instructed to focus attention on one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high-frequency-tuned voxels, and when attention switched, the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand.
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), to a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
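As a rough illustration of the bidirectional multilevel idea, the sketch below treats the prolonged coarse-grid correction as a search direction and scales it with a backtracking line search, in the spirit of FMG/OPT. The restriction/prolongation operators, the gradient-descent smoother and the toy objective are all illustrative assumptions, not the paper's implementation.

```python
# Minimal multilevel-optimization sketch: coarse-grid correction used as
# a search direction, scaled by a line search. Purely illustrative.
import numpy as np

def restrict(u):           # fine -> coarse (simple averaging)
    return 0.5 * (u[0::2] + u[1::2])

def prolong(u):            # coarse -> fine (nearest-neighbour injection)
    return np.repeat(u, 2)

def line_search(f, u, d):
    """Backtracking line search scaling the coarse-grid correction d."""
    alpha, f0 = 1.0, f(u)
    while f(u + alpha * d) > f0 and alpha > 1e-8:
        alpha *= 0.5
    return alpha

def fmg_opt(f, grad, u, levels=3, smooth_steps=5, lr=0.1):
    """Recursive cycle: smooth, recurse on the restricted iterate, then
    use the prolonged coarse correction as a scaled search direction."""
    for _ in range(smooth_steps):          # pre-smoothing
        u = u - lr * grad(u)
    if levels > 1 and u.size >= 4:
        uc = restrict(u)
        uc_new = fmg_opt(f, grad, uc, levels - 1, smooth_steps, lr)
        d = prolong(uc_new - uc)           # coarse-grid correction direction
        u = u + line_search(f, u, d) * d   # scale it with the line search
    for _ in range(smooth_steps):          # post-smoothing
        u = u - lr * grad(u)
    return u

# Toy usage: a quadratic objective defined at every grid resolution.
def target(n):
    return np.sin(np.linspace(0, np.pi, n))

f = lambda u: 0.5 * np.sum((u - target(u.size)) ** 2)
grad = lambda u: u - target(u.size)
u0 = np.zeros(64)
print(f(u0), f(fmg_opt(f, grad, u0)))      # objective before and after
```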
Abstract:
This Bachelor's thesis presents the results of the analysis of the genetic data from the EurGast2 project, "Genetic susceptibility, environmental exposure and gastric cancer risk in an European population", a case-control study nested within the European EPIC cohort ("European Prospective Investigation into Cancer and Nutrition"), whose objective is the study of the genetic and environmental factors associated with the risk of developing gastric cancer (GC). Starting from the data of the EurGast2 study, in which 1,294 SNPs were analyzed in 365 gastric cancer cases and 1,284 controls in a previous single-SNP analysis, the working hypothesis of this thesis is that some variants with a very weak marginal effect, but which jointly with other variants would be associated with GC risk, may have gone undetected. The main objective of the project is therefore the identification of second-order interactions between genetic variants of candidate genes involved in gastric carcinogenesis. The interaction analysis was carried out by applying the Model-Based Multifactor Dimensionality Reduction (MB-MDR) statistical method, developed by Calle et al. in 2008, and two filtering methodologies were applied to select the interactions to explore: 1) filtering of interactions in which one SNP is significant in the single-SNP analysis, and 2) filtering of interactions according to the Synergy measure. The project identified 5 second-order interactions between SNPs significantly associated with an increased risk of developing gastric cancer, with p-values below 10^-4. The identified interactions involve the genes MPO and CDH1, XRCC1 and GAS6, ADH1B and NR5A2, and IL4R and IL1RN (the latter validated under both filtering methodologies). Except for CDH1, none of these genes had previously been significantly associated with GC or prioritized in the earlier analyses, which confirms the interest of analyzing second-order genetic interactions. These interactions can serve as a starting point for further analyses aimed at confirming putative genes, at studying the carcinogenesis mechanisms at the biological and molecular level, and at the search for new therapeutic targets and more efficient diagnostic and prognostic methods.
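For concreteness, here is a minimal sketch of the second filtering strategy, under the assumption that the Synergy measure is the information-theoretic synergy Syn(X1, X2; Y) = I(X1, X2; Y) - I(X1; Y) - I(X2; Y); the measure and thresholds actually used in the project may differ, and the MB-MDR test itself is not reproduced here.

```python
# Assumed synergy-based pair filter; genotypes coded 0/1/2, y = case/control.
import numpy as np
from itertools import combinations

def mutual_info(x, y):
    """Empirical I(X; Y) in nats from paired observations."""
    n = len(x)
    cx, cy, cxy = {}, {}, {}
    for xi, yi in zip(x, y):
        cx[xi] = cx.get(xi, 0) + 1
        cy[yi] = cy.get(yi, 0) + 1
        cxy[(xi, yi)] = cxy.get((xi, yi), 0) + 1
    return sum((c / n) * np.log(c * n / (cx[a] * cy[b]))
               for (a, b), c in cxy.items())

def synergy(x1, x2, y):
    """Syn > 0 suggests the pair is jointly informative about y beyond
    the sum of the two marginal effects."""
    x12 = list(zip(x1, x2))          # joint genotype as a single variable
    return mutual_info(x12, y) - mutual_info(x1, y) - mutual_info(x2, y)

def filter_pairs(genotypes, y, top_k=100):
    """Rank all SNP pairs by synergy; only the top pairs would be
    passed on to the MB-MDR interaction test."""
    scores = [(synergy(genotypes[:, i], genotypes[:, j], y), i, j)
              for i, j in combinations(range(genotypes.shape[1]), 2)]
    return sorted(scores, reverse=True)[:top_k]
```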
Abstract:
Soils on gypsum are well known in dry climates, but have been described very little in temperate climates, and never in Switzerland. This study aims to describe soils affected by gypsum in a temperate climate and to understand their pedogenesis, using standard laboratory analyses performed on ten Swiss soils located on gypsum outcrops. In parallel, phytosociological relevés described the vegetation encountered on gypsiferous ground. A gypsification process (secondary gypsum enrichment by precipitation) was observed in all soils. It was particularly important in regions where potential evapotranspiration strongly exceeds precipitation in summer (central Valais, Chablais under the influence of warm winds). Gypsum contents regularly exceeded 20% in deep horizons, and locally exceeded 70%, building a white, indurated horizon. However, the absence of such a gypsic horizon in the topsoil precluded the use of gypsosol (according to the Référentiel pédologique, BAIZE & GIRARD 2009), the typical name for soils affected by gypsum, but one restricted to dry regions. As all soils had a high content of magnesium carbonates, they were logically classified in the group of DOLOMITOSOLS. However, according to the World Reference Base for Soil Resources (IUSS 2014), five soils can be classified among the Gypsisols, the criteria there being less restrictive. These soils are characterized by a coarse texture and a brittle particulate structure that makes a filtering substrate: water flows through easily, carrying away nutrients, which are not retained by clay, as clay generally does not exceed 1% of the fine material. Saturation with calcium blocks the breakdown of organic matter. Moreover, these soils are often rejuvenated by erosion caused by the rough relief due to gypsum (landslides, sinkholes, cliffs and slopes). Hence, the vegetation is mainly characterized by calcicolous and drought-tolerant species, with mostly xerothermophilic beech (Cephalanthero-Fagenion) and pine forests (Erico-Pinion sylvestris) in the lowlands, and subalpine heathlands (Ericion) and dry calcareous grasslands (Caricion firmae) at higher elevations.
Abstract:
Defining the limits of an urban agglomeration is essential for both fundamental and applied studies in quantitative and theoretical geography. A simple and consistent way of defining such urban clusters is important for performing statistical analyses and comparisons. Traditionally, agglomerations are defined using a rather qualitative approach based on various statistical measures; this definition generally varies from one country to another, and the data taken into account differ. In this paper, we explore the use of the City Clustering Algorithm (CCA) for the agglomeration definition in Switzerland. This algorithm provides a systematic and easy way to define an urban area based only on population data. The CCA allows the specification of the spatial resolution at which the urban clusters are defined. The results from different resolutions are compared and analysed, and the effect of filtering the data is investigated. Different scales and parameters highlight different phenomena. The study of Zipf's law using the visual rank-size rule shows that it is valid only for some specific urban clusters, inside a narrow range of the spatial resolution of the CCA. The scale at which one main cluster emerges can also be found in the analysis using Zipf's law. The study of the urban clusters at different scales using the lacunarity measure, a complementary measure to the fractal dimension, makes it possible to highlight the change of scale at a given range.
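A minimal sketch of the CCA's core step on a gridded population dataset is given below; here the spatial resolution corresponds to one cell (4-connected growth), and both the population threshold and the neighbourhood rule are illustrative assumptions.

```python
# Sketch of the City Clustering Algorithm: grow clusters of populated
# cells by breadth-first absorption of populated neighbours.
import numpy as np
from collections import deque

def cca(pop, threshold=0.0):
    """Label connected clusters of cells with population > threshold."""
    labels = np.zeros(pop.shape, dtype=int)
    next_label = 0
    for start in zip(*np.where(pop > threshold)):
        if labels[start]:
            continue                      # already assigned to a cluster
        next_label += 1
        labels[start] = next_label
        queue = deque([start])
        while queue:                      # breadth-first growth
            i, j = queue.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < pop.shape[0] and 0 <= nj < pop.shape[1]
                        and pop[ni, nj] > threshold and not labels[ni, nj]):
                    labels[ni, nj] = next_label
                    queue.append((ni, nj))
    return labels

# Usage: cluster populations, e.g. for a rank-size (Zipf) plot.
pop = np.random.default_rng(0).poisson(0.7, size=(100, 100))
labels = cca(pop, threshold=0)
sizes = np.bincount(labels.ravel(), weights=pop.ravel())[1:]
print(np.sort(sizes)[::-1][:10])   # ten largest cluster populations
```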
Abstract:
BACKGROUND: The analysis of DNA sequence polymorphisms can provide valuable information on the evolutionary forces shaping nucleotide variation, and gives insight into the functional significance of genomic regions. Recent and ongoing genome projects will radically improve our capability to detect specific genomic regions shaped by natural selection. Currently available methods and software, however, are unsatisfactory for such genome-wide analysis. RESULTS: We have developed methods for the analysis of DNA sequence polymorphisms at the genome-wide scale. These methods, which have been tested on coalescent-simulated and actual data files from mouse and human, have been implemented in the VariScan software package version 2.0. Additionally, we have incorporated a graphical user interface. The main features of this software are: i) exhaustive population-genetic analyses, including those based on coalescent theory; ii) analyses adapted to the shallow data generated by high-throughput genome projects; iii) use of genome annotations to conduct comprehensive analyses separately for different functional regions; iv) identification of relevant genomic regions by sliding-window and wavelet-multiresolution approaches; v) visualization of the results, integrated with current genome annotations, in commonly available genome browsers. CONCLUSION: VariScan is a powerful and flexible software suite for the analysis of DNA polymorphisms. The current version implements new algorithms, methods, and capabilities, providing an important tool for exhaustive exploratory analysis of genome-wide DNA polymorphism data.
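As an illustration of feature iv), the sketch below performs a sliding-window scan of nucleotide diversity and flags outlier windows; window size, step and the z-score cut-off are illustrative, and VariScan's actual statistics and wavelet-multiresolution machinery are not reproduced.

```python
# Sliding-window scan of nucleotide diversity (pi) along an alignment.
import numpy as np

def pairwise_diversity(column):
    """Probability that two random sequences differ at this site
    (gaps/missing data coded 'N' are ignored)."""
    alleles = [a for a in column if a != 'N']
    n = len(alleles)
    if n < 2:
        return 0.0
    counts = {a: alleles.count(a) for a in set(alleles)}
    same = sum(c * (c - 1) for c in counts.values())
    return 1.0 - same / (n * (n - 1))

def sliding_pi(alignment, window=100, step=25):
    """alignment: list of equal-length sequence strings.
    Returns (window start, mean per-site diversity) pairs."""
    site_pi = np.array([pairwise_diversity(col) for col in zip(*alignment)])
    starts = range(0, len(site_pi) - window + 1, step)
    return [(s, site_pi[s:s + window].mean()) for s in starts]

def outlier_windows(scores, z=2.0):
    """Flag windows deviating from the scan-wide mean by > z std devs."""
    vals = np.array([v for _, v in scores])
    mu, sd = vals.mean(), vals.std()
    return [(s, v) for s, v in scores if abs(v - mu) > z * sd]
```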
Abstract:
BACKGROUND: Selective laser trabeculoplasty (SLT) is a relatively new treatment strategy for glaucoma. Its principle is similar to that of argon laser trabeculoplasty (ALT), but it may cause less damage to the trabecular meshwork. METHODS: We assessed the 2-year efficacy of SLT in a noncomparative consecutive case series. Any adult patient either suspected of having glaucoma or with open-angle glaucoma, whose treatment was judged insufficient to reach the target intraocular pressure (IOP), could be recruited. IOP and the number of glaucoma treatments were recorded over 2 years after the procedure. RESULTS: Our sample consisted of 44 consecutive eyes of 26 patients, aged 69+/-8 years. Eyes were treated initially on the lower 180 degrees; three of them were retreated after 15 days on the upper 180 degrees. Fourteen eyes had ocular hypertension, 17 primary open-angle/normal-tension glaucoma, 11 pseudoexfoliation (PEX) glaucoma, and two pigmentary glaucoma. Thirty-six eyes had previously been treated and continued to be treated with topical anti-glaucoma medication; ten had had prior ALT, nine iridotomy, and 12 filtering surgery. The 2-year follow-up could not be completed for eight eyes because they needed filtering surgery. In the remaining 36 eyes, IOP decreased by a mean of 3.3 mmHg, or 17.2% (from 19.2+/-4.7 to 15.0+/-3.6 mmHg), after 2 years (p<0.001). As a secondary outcome, the number of glaucoma treatments decreased from 1.44 to 1.36 drops/patient. Results by patient subgroup were also analyzed: the greatest IOP decrease occurred in eyes that had never been treated with anti-glaucoma medication or that had PEX glaucoma. SLT was probably valuable in a few eyes after filtering surgery; however, the statistical power of the study was not strong enough to draw a firm conclusion. When expressed as survival curves after 2 years, however, only 48% and 41% of eyes experienced a decrease of more than 3 mmHg or more than 20% of preoperative intraocular pressure, respectively. CONCLUSION: SLT decreases IOP somewhat for at least 2 years without an increase in topical glaucoma treatment. However, it cannot totally replace topical glaucoma treatment. In the future, patient selection should be improved to decrease the cost/effectiveness ratio.
Abstract:
Learning object repositories are a basic piece of virtual learning environments used for content management. Nevertheless, learning objects have special characteristics that make traditional solutions for content management ineffective. In particular, browsing and searching for learning objects cannot be based on the typical authoritative metadata used for describing content, such as author, title or publication date, among others. We propose to build a social layer on top of a learning object repository, providing final users with additional services for describing, rating and curating learning objects from a teaching perspective. All these interactions among users, services and resources can be captured and further analyzed, so both browsing and searching can be personalized according to the user profile and the educational context, helping users to find the most valuable resources for their learning process. In this paper we propose to use reputation schemes and collaborative filtering techniques to improve the user interface of a DSpace-based learning object repository.
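A minimal sketch of the collaborative-filtering part is shown below, assuming a simple user-based scheme over explicit ratings of learning objects; the DSpace integration and the reputation weighting described in the paper are omitted, and all names are illustrative.

```python
# User-based collaborative filtering over a small ratings matrix.
import numpy as np

def predict_ratings(R):
    """R: users x items matrix, 0 = unrated. Returns predicted ratings
    as similarity-weighted averages over other users' ratings."""
    mask = R > 0
    # Cosine similarity between users' rating vectors (zeros for unrated).
    norms = np.linalg.norm(R, axis=1, keepdims=True) + 1e-9
    S = (R @ R.T) / (norms * norms.T)
    np.fill_diagonal(S, 0.0)              # exclude self-similarity
    num = S @ R                           # weighted sum of ratings
    den = np.abs(S) @ mask + 1e-9         # normalize by available ratings
    return num / den

# Usage: recommend unrated learning objects to user 0.
R = np.array([[5, 4, 0, 0],
              [4, 5, 1, 0],
              [0, 1, 5, 4]], dtype=float)
P = predict_ratings(R)
unrated = np.where(R[0] == 0)[0]
print(sorted(unrated, key=lambda j: -P[0, j]))  # best candidates first
```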
Abstract:
This paper proposes a novel high-capacity robust audio watermarking algorithm that uses the high-frequency band of the wavelet decomposition, where the human auditory system (HAS) is not very sensitive to alteration. The main idea is to divide the high-frequency band into frames and, for embedding, to change the wavelet samples depending on the average of the relevant frame's samples. The experimental results show that the method has a very high capacity (about 11,000 bps) without significant perceptual distortion (ODG in [-1, 0] and SNR about 30 dB), and provides robustness against common audio signal processing such as additive noise, filtering, echo and MPEG compression (MP3).
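The following sketch illustrates the embedding idea on the highest-frequency wavelet sub-band, assuming PyWavelets and an informed (non-blind) extractor for simplicity; the frame length, embedding strength and the +/- rule are illustrative assumptions, not the paper's exact scheme.

```python
# Frame-wise embedding in the high-frequency wavelet band (sketch).
import numpy as np
import pywt

def embed(signal, bits, frame=64, strength=0.05, wavelet='db4'):
    approx, detail = pywt.wavedec(signal, wavelet, level=1)
    detail = detail.copy()                 # high-frequency band
    for k, bit in enumerate(bits):
        seg = detail[k * frame:(k + 1) * frame]
        avg = np.mean(np.abs(seg))         # frame average drives step size
        seg += (1 if bit else -1) * strength * avg
    return pywt.waverec([approx, detail], wavelet)

def extract(watermarked, original, frame=64, n_bits=None, wavelet='db4'):
    """Informed extraction: compare the two detail bands frame by frame."""
    dw = pywt.wavedec(watermarked, wavelet, level=1)[1]
    do = pywt.wavedec(original, wavelet, level=1)[1]
    n_bits = n_bits or len(dw) // frame
    return [int(np.sum(dw[k*frame:(k+1)*frame] - do[k*frame:(k+1)*frame]) > 0)
            for k in range(n_bits)]

# Usage: hide and recover a few bits in a test tone.
t = np.linspace(0, 1, 8192)
audio = np.sin(2 * np.pi * 440 * t)
bits = [1, 0, 1, 1, 0]
wm = embed(audio, bits)
print(extract(wm, audio, n_bits=len(bits)))   # -> [1, 0, 1, 1, 0]
```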
Abstract:
In this paper, an advanced technique for the generation of deformation maps using synthetic aperture radar (SAR) data is presented. The algorithm estimates the linear and nonlinear components of the displacement, the error of the digital elevation model (DEM) used to cancel the topographic terms, and the atmospheric artifacts, from a reduced set of low-spatial-resolution interferograms. The pixel candidates are selected from those presenting a good coherence level in the whole set of interferograms, and the resulting nonuniform mesh is tessellated with a Delaunay triangulation to establish connections among them. The linear component of movement and the DEM error are estimated by adjusting a linear model to the data, only on the connections. Later on, this information, once unwrapped to retrieve the absolute values, is used to calculate the nonlinear component of movement and the atmospheric artifacts with alternating filtering techniques in both the temporal and spatial domains. The method presents high flexibility with respect to the required number of images and the baseline lengths; however, better results are obtained with large datasets of short-baseline interferograms. The technique has been tested with European Remote Sensing SAR data from an area of Catalonia (Spain) and validated against on-field precise leveling measurements.
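The per-arc linear adjustment can be pictured as follows: across the interferogram set, the phase difference between two connected pixels is modelled as a term linear in the temporal baseline (velocity increment) plus a term linear in the perpendicular baseline (DEM-error increment). The sketch below fits both by least squares on simulated data; the constants are illustrative, and the real method must cope with wrapped phases (typically by maximizing model coherence rather than by plain least squares).

```python
# Linear model on one Delaunay arc: velocity and DEM-error increments.
import numpy as np

WAVELENGTH = 0.056                  # C-band wavelength in metres (ERS)
R, THETA = 850e3, np.deg2rad(23)    # illustrative range and look angle

def fit_arc(dphi, t_base, p_base):
    """dphi: phase differences (rad) on the arc, one per interferogram.
    t_base: temporal baselines (years); p_base: perpendicular baselines (m).
    Returns (velocity increment m/yr, DEM-error increment m)."""
    A = np.column_stack([
        (4 * np.pi / WAVELENGTH) * t_base,                        # velocity
        (4 * np.pi / WAVELENGTH) * p_base / (R * np.sin(THETA)),  # DEM error
    ])
    x, *_ = np.linalg.lstsq(A, dphi, rcond=None)
    return x

# Toy usage with simulated baselines and a known answer.
rng = np.random.default_rng(1)
t, b = rng.uniform(0, 3, 20), rng.uniform(-200, 200, 20)
true = np.array([0.01, 5.0])        # 1 cm/yr velocity, 5 m DEM error
A = np.column_stack([(4*np.pi/WAVELENGTH)*t,
                     (4*np.pi/WAVELENGTH)*b/(R*np.sin(THETA))])
dphi = A @ true + 0.1 * rng.standard_normal(20)
print(fit_arc(dphi, t, b))          # approximately [0.01, 5.0]
```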
Abstract:
A particular property of the matched desired-impulse-response receiver is introduced in this paper, namely, the fact that full exploitation of the diversity is obtained with multiple beamformers when the channel is spatially and temporally dispersive. This particularity makes the receiver especially suitable for mobile and underwater communications. The new structure provides better performance than conventional and weighted VRAKE receivers, and a diversity gain with no need for additional radio frequency equipment. The baseband hardware needed for this new receiver may be obtained through reconfigurability of the RAKE architectures available at the base station. The proposed receiver is tested through simulations assuming the UTRA frequency-division-duplexing mode.
Abstract:
A general criterion for the design of adaptive systems in digital communications, called the statistical reference criterion, is proposed. The criterion is based on imposing the probability density function of the signal of interest at the output of the adaptive system; its application to the scenario of highly powerful interferers is the main focus of this paper. The knowledge of the pdf of the wanted signal is used as a discriminator between signals, so that interferers with differing distributions are rejected by the algorithm. Its performance is studied over a range of scenarios. Equations for gradient-based coefficient updates are derived, and the relationship with other existing algorithms, like the minimum variance and Wiener criteria, is examined.
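A hedged sketch of what a pdf-matching criterion of this kind can look like is given below; the Kullback-Leibler form and the resulting stochastic-gradient update are illustrative, not necessarily the paper's exact equations.

```latex
% Illustrative pdf-matching criterion for a real-valued linear combiner
% y_k = w^T x_k: drive the output pdf f_y toward the reference pdf f_d
% of the signal of interest, e.g. via the Kullback-Leibler divergence.
J(\mathbf{w}) = D_{\mathrm{KL}}\!\left(f_y \,\|\, f_d\right)
             = \int f_y(v)\,\log\frac{f_y(v)}{f_d(v)}\,\mathrm{d}v,
\qquad y_k = \mathbf{w}^{\mathsf{T}}\mathbf{x}_k .

% Dropping the (w-dependent) output-entropy term and replacing the
% expectation by its instantaneous value gives a generic
% stochastic-gradient coefficient update that pushes the outputs toward
% high-density regions of f_d:
\mathbf{w}_{k+1} = \mathbf{w}_k
  + \mu\,\frac{f_d'(y_k)}{f_d(y_k)}\,\mathbf{x}_k .
```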
Abstract:
Line converters have become an attractive AC/DC power conversion solution in industrial applications. Line converters are based on controllable semiconductor switches, typically insulated gate bipolar transistors. Compared to traditional diode-bridge-based power converters, line converters have many advantageous characteristics, including bidirectional power flow, a controllable dc-link voltage and power factor, and sinusoidal line current. This thesis considers the control of the line converter and its application to power quality improvement. The line converter control system studied is based on virtual flux linkage orientation and the direct torque control (DTC) principle. A new DTC-based current control scheme is introduced and analyzed. The overmodulation characteristics of the DTC converter are considered and an analytical equation for the maximum modulation index is derived. The integration of active filtering features into the line converter is considered. Three different active filtering methods are implemented. A frequency-domain method, which is based on selective harmonic sequence elimination, and a time-domain method, which is effective in a wider frequency band, are used in harmonic current compensation. Also, a voltage feedback active filtering method, which mitigates harmonic sequences of the grid voltage, is implemented. The frequency-domain and voltage feedback active filtering control systems are analyzed and controllers are designed. The designs are verified with practical measurements. The performance and characteristics of the implemented active filtering methods are compared, and the effect of L- and LCL-type line filters is discussed. The importance of a correct grid impedance estimate in the voltage feedback active filter control system is discussed, and a new measurement-based method to obtain it is proposed. Also, a power conditioning system (PCS) application of the line converter is considered. A new method for correcting the voltage unbalance of a PCS-fed island network is proposed and experimentally validated.
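As a sketch of the frequency-domain (selective harmonic sequence elimination) idea, the snippet below isolates one harmonic sequence by rotating the measured current into a reference frame synchronous with that harmonic and low-pass filtering; the filter gain, sampling rate and single-harmonic scope are illustrative assumptions, and the thesis embeds this inside the DTC-based control rather than as standalone code.

```python
# Selective extraction of one harmonic sequence via frame rotation.
import numpy as np

def harmonic_dc_component(i_alpha, i_beta, theta, h=-5, fs=10_000, fc=10.0):
    """Rotate the stationary-frame current into the h-th harmonic frame
    and low-pass filter; the dc output is that harmonic's phasor
    (h = -5: 5th harmonic, negative sequence)."""
    i_s = i_alpha + 1j * i_beta
    rotated = i_s * np.exp(-1j * h * theta)   # h-th harmonic frame
    alpha = 2 * np.pi * fc / fs               # 1st-order low-pass gain
    out = np.zeros_like(rotated)
    acc = 0.0 + 0.0j
    for k, v in enumerate(rotated):
        acc += alpha * (v - acc)
        out[k] = acc
    return out

# Usage: a fundamental plus a 5th-harmonic (negative-sequence) component.
fs, f1 = 10_000, 50.0
t = np.arange(0, 0.2, 1 / fs)
theta = 2 * np.pi * f1 * t
i_s = np.exp(1j * theta) + 0.2 * np.exp(-1j * 5 * theta)
d5 = harmonic_dc_component(i_s.real, i_s.imag, theta, h=-5)
print(abs(d5[-1]))   # ~0.2: detected 5th-harmonic magnitude
```

The extracted phasor would then feed a controller that generates a compensating current reference cancelling that sequence, one rotating frame per harmonic to be eliminated.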