77 results for Recursive Filtering
Abstract:
The human auditory system comprises specialized but interacting anatomic and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent common mechanisms subserve such plasticity. A first series of experiments establishes a temporal hierarchy along which object sounds are discriminated, from basic to fine-grained categorical boundaries and learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series of experiments considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for accurate perception of their order. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported, wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ∼300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on chromatin immunoprecipitation followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring new insights into ChIP-Seq data analysis. First, we point out some common and so far unrecognized artifacts of the method. The distribution of sequence tags in the genome is not uniform, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual accumulations of sequence tags create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors act mainly as activators of transcription and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors interact only with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
RÉSUMÉ: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific interactions control many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology called ChIP-Seq has recently been developed for the analysis of protein-DNA interactions at the whole-genome scale; this approach is based on chromatin immunoprecipitation and on high-throughput DNA sequencing. The new ChIP-Seq approach therefore has strong potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring new perspectives to the analysis of ChIP-Seq data. First, we identified very common artifacts of this method that had so far gone unsuspected. The distribution of sequences in the genome is not uniform, and we found extreme accumulations of sequences at specific regions of the human and mouse genomes. These artifactual sequence accumulations create false peaks in all ChIP-Seq data, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful tool for analyzing ChIP-Seq data, which can increase the biological knowledge gained from ChIP-Seq datasets. We created a random sampling algorithm and used this method to reveal some of the important biological properties of the DNA-binding proteins named Nuclear Factor I (NFI). Finally, by analyzing in detail the ChIP-Seq data for the Nuclear Factor I family of transcription factors, we revealed that these proteins act mainly as transcriptional activators and that they are associated with specific chromatin modifications that are markers of open chromatin. We believe that NFI factors interact only with DNA wrapped around the nucleosome. We also found several genomic regions that indicate a possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as insulator sequences in biotechnology applications.
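The hot-spot filtering and random-sampling steps summarized above lend themselves to a short illustration. The sketch below is not the thesis pipeline: it bins mapped tags along one chromosome, flags bins whose counts are implausibly high under a uniform (Poisson) background as artifact hot-spots, and then draws an unbiased random subsample of the surviving tags. The bin size, the cutoff, and all function and variable names are assumptions introduced for illustration only.

```python
import numpy as np
from scipy.stats import poisson

def filter_hotspots_and_sample(tag_positions, chrom_length, bin_size=1000,
                               hotspot_cutoff=1e-6, sample_fraction=0.5,
                               rng=None):
    """Illustrative sketch: drop tags falling into abnormally dense bins,
    then draw an unbiased random subsample of the remaining tags.

    tag_positions : 1-D array of mapped tag start coordinates (one chromosome).
    """
    rng = np.random.default_rng() if rng is None else rng
    tags = np.asarray(tag_positions)

    # Bin the chromosome and count tags per bin.
    n_bins = int(np.ceil(chrom_length / bin_size))
    bin_idx = np.minimum(tags // bin_size, n_bins - 1).astype(int)
    counts = np.bincount(bin_idx, minlength=n_bins)

    # Flag "hot-spot" bins whose counts are wildly unlikely under a uniform
    # (Poisson) background; such bins create false peaks in any dataset.
    lam = tags.size / n_bins
    threshold = poisson.isf(hotspot_cutoff, lam)
    hot_bins = counts > threshold

    # Keep only tags that fall outside hot-spot bins.
    kept = tags[~hot_bins[bin_idx]]

    # Unbiased random subsample, e.g. to compare libraries at equal depth.
    n_sample = int(sample_fraction * kept.size)
    sampled = rng.choice(kept, size=n_sample, replace=False)
    return kept, sampled, np.flatnonzero(hot_bins)
```

In practice one would apply such a filter per chromosome before peak calling, and use the subsampling step to put libraries of different depths on an equal footing; both choices here are illustrative defaults rather than the thesis's actual parameters.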
Abstract:
The ability to model biodiversity patterns is of prime importance in this era of severe environmental crisis. Species assemblages along environmental gradients are shaped by the interplay of biotic interactions and abiotic environmental filtering. Accounting for complex biotic interactions across a wide array of species has so far remained challenging. Here, we propose to use food web models that can infer potential interaction links between species as a constraint in species distribution models. Using a plant-herbivore (butterfly) interaction dataset, we demonstrate that this combined approach improves both species distribution and community forecasts. Most importantly, the combined approach is particularly useful for modelling more generalist species that have multiple potential interaction links and for which gaps in the literature may be recurrent. Our combined approach points to a promising direction for modelling the spatial variation of entire species interaction networks. Our work has implications for studies of range-shifting species and invasive species biology, where it may be unknown how a given biota might interact with a potential invader or under a future climate.
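As a rough illustration of the idea of using inferred interaction links as a constraint on species distribution models, the sketch below down-weights a herbivore's climate-only occurrence probability by the probability that at least one potential host plant is present at a site. This is a minimal toy weighting rule, not the authors' model; the function name, the link-strength weighting, and the toy numbers are all hypothetical.

```python
import numpy as np

def constrain_sdm_by_hosts(herbivore_prob, plant_probs, link_strengths):
    """Down-weight a butterfly's climate-only SDM prediction at sites where
    none of its potential host plants are predicted to occur.

    herbivore_prob : (n_sites,) climate-based occurrence probabilities.
    plant_probs    : (n_sites, n_plants) host-plant occurrence probabilities.
    link_strengths : (n_plants,) inferred interaction weights in [0, 1]
                     (e.g. from a food-web / trait-matching model).
    """
    # Probability that at least one suitable host is present at each site,
    # treating plants as independent and weighting by link strength.
    p_no_host = np.prod(1.0 - plant_probs * link_strengths, axis=1)
    host_availability = 1.0 - p_no_host
    return herbivore_prob * host_availability

# Hypothetical toy example: 3 sites, 2 potential host plants.
herbivore = np.array([0.8, 0.6, 0.7])
plants = np.array([[0.9, 0.1],
                   [0.0, 0.0],
                   [0.2, 0.8]])
links = np.array([1.0, 0.5])
print(constrain_sdm_by_hosts(herbivore, plants, links))
```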
Abstract:
A better understanding of the factors that mould ecological community structure is required to accurately predict community composition and to anticipate threats to ecosystems due to global changes. We tested how well stacked climate-based species distribution models (S-SDMs) could predict butterfly communities in a mountain region. It has been suggested that climate is the main force driving butterfly distribution and community structure in mountain environments, and that, as a consequence, climate-based S-SDMs should yield unbiased predictions. In contrast to this expectation, at lower altitudes, climate-based S-SDMs overpredicted butterfly species richness at sites with low plant species richness and underpredicted species richness at sites with high plant species richness. According to two indices of composition accuracy, the Sorensen index and a matching coefficient considering both absences and presences, S-SDMs were more accurate in plant-rich grasslands. Butterflies display strong and often specialised trophic interactions with plants. At lower altitudes, where land use is more intense, considering climate alone without accounting for land use influences on grassland plant richness leads to erroneous predictions of butterfly presences and absences. In contrast, at higher altitudes, where climate is the main force filtering communities, there were fewer differences between observed and predicted butterfly richness. At high altitudes, even if stochastic processes decrease the accuracy of predictions of presence, climate-based S-SDMs are able to better filter out butterfly species that are unable to cope with severe climatic conditions, providing more accurate predictions of absences. Our results suggest that predictions should account for plants in disturbed habitats at lower altitudes but that stochastic processes and heterogeneity at high altitudes may limit prediction success of climate-based S-SDMs.
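The two composition-accuracy indices mentioned above can be computed directly from binary presence/absence vectors: the Sorensen index rewards shared presences only, whereas the simple matching coefficient also credits correctly predicted absences. A minimal sketch follows; the variable names and toy community are illustrative.

```python
import numpy as np

def composition_accuracy(observed, predicted):
    """Sorensen index and simple matching coefficient between an observed
    and a predicted community (binary presence/absence vectors over species)."""
    obs = np.asarray(observed, dtype=bool)
    pred = np.asarray(predicted, dtype=bool)

    a = np.sum(obs & pred)    # species present in both
    b = np.sum(obs & ~pred)   # observed but not predicted
    c = np.sum(~obs & pred)   # predicted but not observed
    d = np.sum(~obs & ~pred)  # correctly predicted absences

    sorensen = 2 * a / (2 * a + b + c) if (2 * a + b + c) > 0 else np.nan
    matching = (a + d) / (a + b + c + d)
    return sorensen, matching

# Hypothetical site with 6 candidate butterfly species.
obs  = [1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0]
print(composition_accuracy(obs, pred))  # approximately (0.667, 0.667)
```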
Abstract:
Purpose: To study the filtering site using ultrasound biomicroscopy (UBM) after posterior deep sclerectomy with Ex-PRESS X-50 implant in patients undergoing filtering surgery. Methods: Twenty-six patients participated in this prospective, non-comparative study and underwent a posterior deep sclerectomy with Ex-PRESS X-50 tube implantation. Clinical outcome factors recorded included intraocular pressure (IOP), number of antiglaucoma medications, best corrected visual acuity (BCVA), and the frequency and types of complications. Six months postoperatively, an ultrasound biomicroscopy examination was performed. Results: Mean follow-up was 12.0 ± 3.4 months. Mean IOP decreased from 21 ± 5.7 mmHg to 12.4 ± 3 mmHg. At the last follow-up examination, 65% of eyes had achieved complete success and 30% qualified success. The mean number of antiglaucoma medications decreased from 2.5 ± 1.2 preoperatively to 0.7 ± 1 at the last follow-up. BCVA was unchanged. Twenty-seven complications were observed. On the UBM images, the mean intrascleral space volume was 0.25 ± 0.27 mm³, and no relationship was found between this volume and intraocular pressure reduction. In 5/26 (19%) eyes we noted a suprachoroidal hypoechoic space. Low-reflective blebs (L-type) were the most common: 15/26 (58%). No correlation between UBM findings and surgical success was evident. Conclusions: Deep sclerectomy with Ex-PRESS X-50 tube implantation appears to be an effective glaucoma surgery. It allows satisfactory IOP reduction with a low number of postoperative complications. The advantages of deep sclerectomy with collagen implant are maintained with this modified technique. In both, the same reflectivity types of filtering blebs are present (high, low, encapsulated, and flat). UBM highlights the three mechanisms of aqueous humour resorption previously identified, but no correlation with surgical success could be demonstrated.
This thesis is an ultrasound biomicroscopy (UBM) analysis of the filtration site after modified posterior deep sclerectomy with implantation of an Ex-PRESS X-50 tube. Twenty-six patients with open-angle glaucoma took part in this prospective, non-comparative study. The inclusion criterion was open-angle glaucoma uncontrolled despite maximal topical treatment. Different types of filtering surgery are performed for glaucoma, including trabeculectomy and deep sclerectomy. The surgical procedure in this study consists of implanting an Ex-PRESS X-50 tube of defined dimensions (3 mm long, 50 μm internal diameter) into the anterior chamber at the level of the trabecular meshwork, under a scleral flap, allowing drainage of aqueous humour towards the subconjunctival spaces and a reduction of intraocular pressure. This technique involves only the dissection of a superficial scleral flap, without the deep scleral flap of a classical deep sclerectomy. The working mechanisms of this modified deep sclerectomy were explored by UBM, which provides high-resolution images similar to anatomical sections. The volume of the artificially created intrascleral space can thus be measured and correlated with intraocular pressure, and hence with the success rate. The different echogenicity types of the subconjunctival filtering bleb produced by the diversion of aqueous humour were also observed. The possible presence of additional filtration at the choroidal level was also assessed. From February 2007 to June 2008, in the twenty-six eyes of the twenty-six patients we followed the intrascleral volume, the subconjunctival filtration and, where present, the choroidal filtration, as well as visual acuity, intraocular pressure, the number of topical antihypertensive treatments, and complications. The results show a 41% reduction relative to the preoperative intraocular pressure, which is statistically significant (p < 0.0005). Visual acuity remained stable. The number of antiglaucoma medications decreased significantly from 2.5 ± 1.2 preoperatively to 0.7 ± 1.0 at the last examination (p < 0.0005). The volume of the intrascleral space, which always appears fusiform on ultrasound, was not significantly correlated with better surgical success, although a trend towards a correlation between larger volume and lower intraocular pressure was apparent. Filtering blebs were classified into the four categories described in the literature. The distribution shows a majority of hypoechoic L-type blebs, 15/26 (58%), and identical proportions, 4/26 (16%), of hyperechoic (H-type) and encapsulated (E-type) blebs; flat hyperechoic blebs (F-type) were the least frequent, 3/26 (11%). The hyporeflective line visible in 19% of cases between the sclera and the choroid, potentially representing suprachoroidal drainage, was not statistically associated with better filtration or lower intraocular pressure, but remains a third filtration route in addition to subconjunctival and intrascleral filtration. In conclusion, this modified technique, which offers greater safety and satisfactory lowering of intraocular pressure, may in some cases be an alternative to classical deep sclerectomy, with which it shares the filtration mechanisms demonstrated by ultrasound biomicroscopy.
Abstract:
BACKGROUND: Euphorbia plants grow in many gardens. Their milky latex is, however, a strong irritant that may induce various ocular lesions, from keratoconjunctivitis to severe uveitis. HISTORY AND SIGNS: An 86-year-old woman developed severe unilateral anterior chamber inflammation associated with Descemet membrane folds after direct contact with Euphorbia sap. Visual acuity was limited to counting fingers. Her eye had undergone filtering surgery ten years previously. The patient was closely followed to rule out the diagnosis of bacterial endophthalmitis. THERAPY AND OUTCOME: Symptoms progressively resolved after topical administration of 3 mg/mL ofloxacin and 1% prednisolone acetate. CONCLUSIONS: Euphorbia sap toxicity may take different forms, from keratoconjunctivitis to severe uveitis. Euphorbia sap-induced uveitis should be kept in mind when the patient has been in contact with freshly cut plants.
Abstract:
Interpretability and power of genome-wide association studies can be increased by imputing unobserved genotypes, using a reference panel of individuals genotyped at higher marker density. For many markers, genotypes cannot be imputed with complete certainty, and the uncertainty needs to be taken into account when testing for association with a given phenotype. In this paper, we compare currently available methods for testing association between uncertain genotypes and quantitative traits. We show that some previously described methods offer poor control of the false-positive rate (FPR), and that satisfactory performance of these methods is obtained only by using ad hoc filtering rules or by using a harsh transformation of the trait under study. We propose new methods that are based on exact maximum likelihood estimation and use a mixture model to accommodate nonnormal trait distributions when necessary. The new methods adequately control the FPR and also have equal or better power compared to all previously described methods. We provide a fast software implementation of all the methods studied here; our new method requires computation time of less than one computer-day for a typical genome-wide scan, with 2.5 M single nucleotide polymorphisms and 5000 individuals.
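For context, the simplest widely used way to test an imputed SNP against a quantitative trait is to regress the trait on the expected allele dosage; the sketch below shows that baseline only and is not the exact maximum-likelihood mixture-model method proposed in the paper. The function name and toy data are assumptions.

```python
import numpy as np
from scipy import stats

def dosage_association_test(genotype_probs, trait):
    """Simple dosage-based test for association between an imputed SNP and a
    quantitative trait (a common baseline, not the exact-ML mixture method).

    genotype_probs : (n_individuals, 3) posterior probabilities of carrying
                     0, 1 and 2 copies of the alternate allele.
    trait          : (n_individuals,) quantitative phenotype.
    """
    probs = np.asarray(genotype_probs, dtype=float)
    y = np.asarray(trait, dtype=float)

    # Expected allele dosage per individual: 0*P(0) + 1*P(1) + 2*P(2).
    dosage = probs @ np.array([0.0, 1.0, 2.0])

    # Linear regression of trait on dosage; the slope p-value is the
    # association test statistic.
    result = stats.linregress(dosage, y)
    return result.slope, result.pvalue

# Hypothetical toy data: 8 individuals with fake imputation posteriors.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=8)
trait = rng.normal(size=8)
print(dosage_association_test(probs, trait))
```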
Abstract:
A methodology of exploratory data analysis investigating the phenomenon of orographic precipitation enhancement is proposed. Precipitation observations obtained from three Swiss Doppler weather radars are analysed for the major precipitation event of August 2005 in the Alps. Image processing techniques are used to detect significant precipitation cells/pixels in radar images while filtering out spurious effects due to ground clutter. The contribution of topography to precipitation patterns is described by an extensive set of topographical descriptors computed from the digital elevation model at multiple spatial scales. Additionally, the motion vector field is derived from subsequent radar images and integrated into the set of topographic features to highlight the slopes exposed to the main flows. Exploratory data analysis with a recent spectral clustering algorithm shows that orographic precipitation cells are generated under specific flow and topographic conditions. Repeatability of precipitation patterns at particular spatial locations is found to be linked to specific local terrain shapes, e.g. at the tops of hills and on the upwind side of mountains. This methodology and our empirical findings for the Alpine region provide a basis for building computational data-driven models of orographic enhancement and triggering of precipitation.
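A minimal sketch of the kind of spectral clustering step described above, assuming scikit-learn and per-pixel feature vectors built from topographic and flow descriptors; the descriptor set, parameters, and toy data are illustrative, not the study's actual configuration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import SpectralClustering

def cluster_precipitation_pixels(features, n_clusters=4, random_state=0):
    """Group radar pixels with significant precipitation into clusters based
    on topographic/flow descriptors (e.g. elevation, slope, upwind exposure).

    features : (n_pixels, n_descriptors) array, one row per precipitation pixel.
    """
    X = StandardScaler().fit_transform(features)
    model = SpectralClustering(n_clusters=n_clusters, random_state=random_state)
    return model.fit_predict(X)

# Hypothetical toy features: [elevation, slope, upwind exposure] per pixel,
# drawn from two artificial regimes (mountain crest vs. valley floor).
rng = np.random.default_rng(1)
toy = np.vstack([rng.normal([2000.0, 25.0, 0.8], 1.0, size=(50, 3)),
                 rng.normal([600.0, 5.0, 0.1], 1.0, size=(50, 3))])
labels = cluster_precipitation_pixels(toy, n_clusters=2)
print(np.bincount(labels))
```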
Abstract:
PURPOSE: To retrospectively assess the influence of prophylactic cranial irradiation (PCI) timing on brain relapse rates in patients treated with two different chemoradiotherapy (CRT) regimens for Stage IIIB non-small-cell lung cancer (NSCLC). METHODS AND MATERIALS: A cohort of 134 patients with Stage IIIB NSCLC in recursive partitioning analysis Group 1 was treated with PCI (30 Gy at 2 Gy/fraction) following one of two CRT regimens. Regimen 1 (n = 58) consisted of three cycles of induction chemotherapy (ICT) followed by concurrent CRT (C-CRT). Regimen 2 (n = 76) consisted of immediate C-CRT during thoracic radiotherapy. RESULTS: At a median follow-up of 27.6 months (range, 7.2-40.4 months), 65 patients were alive. Median survival, progression-free survival, and brain metastasis-free survival (BMFS) for the whole study cohort were 23.4, 15.4, and 23.0 months, respectively. Median survival time and the 3-year survival rate for regimens 1 and 2 were 19.3 vs. 26.1 months (p = 0.001) and 14.4% vs. 34.4% (p < 0.001), respectively. Median time from the initiation of primary treatment to PCI was 123.2 days (range, 97-161) for regimen 1 and 63.4 days (range, 55-74) for regimen 2 (p < 0.001). Overall, 11 (8.2%) patients developed brain metastasis (BM) during the follow-up period: 8 (13.8%) in regimen 1 and 3 (3.9%) in regimen 2 (p = 0.03). Only 3 (2.2%) patients developed BM at the site of first failure, and for 2 of them it was also the sole site of recurrence. Median BMFS for regimens 1 and 2 was 17.4 months (13.5-21.3) vs. 26.0 months (22.9-29.1), respectively (p < 0.001). CONCLUSION: These results suggest that in Stage IIIB NSCLC patients treated with PCI, lower BM incidence and longer survival result from immediate C-CRT rather than ICT-first regimens. This indicates the benefit of earlier PCI use, without the delay introduced by induction protocols.
Abstract:
BACKGROUND: Ultra high throughput sequencing (UHTS) technologies find an important application in targeted resequencing of candidate genes or of genomic intervals from genetic association studies. Despite the extraordinary power of these new methods, they are still rarely used in routine analysis of human genomic variants, in part because of the absence of specific standard procedures. The aim of this work is to provide human molecular geneticists with a tool to evaluate the best UHTS methodology for efficiently detecting DNA changes, from common SNPs to rare mutations. METHODOLOGY/PRINCIPAL FINDINGS: We tested the three most widespread UHTS platforms (Roche/454 GS FLX Titanium, Illumina/Solexa Genome Analyzer II, and Applied Biosystems/SOLiD System 3) on a well-studied region of the human genome containing many polymorphisms and a very rare heterozygous mutation located within an intronic repetitive DNA element. We identify the qualities and the limitations of each platform and describe some peculiarities of UHTS in resequencing projects. CONCLUSIONS/SIGNIFICANCE: When appropriate filtering and mapping procedures are applied, UHTS technology can be safely and efficiently used as a tool for targeted detection of human DNA variations. Unless particular platform-dependent characteristics are needed for specific projects, the most relevant parameter to consider in mainstream human genome resequencing procedures is the cost per sequenced base pair associated with each machine.
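Since the comparison above ultimately turns on the cost per sequenced base pair of each machine, the trivial helper below shows how that figure follows from run cost and usable yield. All numbers are placeholders, not vendor specifications.

```python
def cost_per_base_pair(run_cost, reads_per_run, mean_read_length, usable_fraction=1.0):
    """Cost per sequenced base pair = run cost / usable bases per run.
    All inputs used below are hypothetical placeholders, not vendor figures."""
    usable_bases = reads_per_run * mean_read_length * usable_fraction
    return run_cost / usable_bases

# Hypothetical platforms (numbers are illustrative only).
platforms = {
    "platform_A_long_reads":  cost_per_base_pair(6000, 1.0e6, 400, 0.9),
    "platform_B_short_reads": cost_per_base_pair(9000, 2.0e8, 75, 0.8),
}
for name, cost in sorted(platforms.items(), key=lambda kv: kv[1]):
    print(f"{name}: {cost:.2e} $/bp")
```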
Abstract:
In this article we present a method to achieve three-dimensional contouring of macroscopic objects. A modified reference-wave speckle interferometer is used in conjunction with a source of reduced coherence. The depth signal is given by the envelope of the interference signal, which is directly determined by the coherence length of the source. Fringes in the interferogram, obtained in a single shot, are extracted by means of adequate filtering. With the approach based on an off-axis configuration, a contour line can be extracted from a single acquisition, thus allowing the system to be used in harsh environments.
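As an illustration of the coherence-envelope principle described above, the sketch below extracts the fringe envelope of a synthetic low-coherence interference signal via the analytic signal (Hilbert transform) and takes the envelope peak as the depth estimate. This is a generic demonstration, not the paper's off-axis filtering chain; the wavelength, coherence length, and sampling are assumed values.

```python
import numpy as np
from scipy.signal import hilbert

def coherence_envelope(intensity, remove_dc=True):
    """Extract the fringe envelope of a 1-D low-coherence interference signal
    via the analytic signal (a generic illustration of envelope detection)."""
    s = np.asarray(intensity, dtype=float)
    if remove_dc:
        s = s - s.mean()
    return np.abs(hilbert(s))

# Synthetic low-coherence interferogram: Gaussian envelope times carrier fringes.
z = np.linspace(-50e-6, 50e-6, 2001)          # optical path difference [m]
coherence_length = 10e-6                      # assumed source coherence length
carrier = np.cos(2 * np.pi * z / 0.8e-6)      # fringe period ~ assumed wavelength
signal = 1.0 + np.exp(-(z / coherence_length) ** 2) * carrier

env = coherence_envelope(signal)
depth_estimate = z[np.argmax(env)]            # envelope peak -> surface depth
print(f"estimated depth: {depth_estimate * 1e6:.2f} um")
```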
Abstract:
The molecular diagnosis of retinal dystrophies is difficult because of the large number of genes implicated and is rarely helped by genotype-phenotype correlations. This prompted us to develop IROme, a custom-designed, in-solution targeted exon capture assay (SeqCap EZ Choice library, Roche NimbleGen) covering 60 retinitis pigmentosa-linked genes and three candidate genes (942 exons). Pyrosequencing was performed on a Roche 454 GS Junior benchtop high-throughput sequencing platform. In total, 23 patients affected by retinitis pigmentosa were analyzed. Per patient, 39.6 Mb were generated and 1111 sequence variants were detected on average, at a median coverage of 17-fold. After data filtering and sequence variant prioritization, disease-causing mutations were identified in ABCA4, CNGB1, GUCY2D, PROM1, PRPF8, PRPF31, PRPH2, RHO, RP2, and TULP1 for twelve patients (55%), ten of these mutations having never been reported previously. Potential mutations were identified in 5 additional patients, and in only 6 patients (26%) could no molecular diagnosis be established. In conclusion, targeted exon capture and next-generation sequencing are a valuable and efficient approach to identifying disease-causing sequence variants in retinal dystrophies.
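The data filtering and variant prioritization step mentioned above is typically a cascade of simple criteria such as read depth, population frequency, and predicted impact. The sketch below is a generic illustration of such a cascade, not the IROme pipeline; the thresholds, field names, and example calls are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    depth: int             # read depth at the variant position
    pop_frequency: float   # frequency in a reference population
    impact: str            # e.g. "missense", "nonsense", "synonymous"

DAMAGING = {"nonsense", "frameshift", "splice_site", "missense"}

def prioritize(variants, min_depth=10, max_pop_freq=0.01):
    """Generic variant prioritization sketch: keep well-covered, rare variants
    with a potentially damaging predicted impact (thresholds are illustrative)."""
    kept = [v for v in variants
            if v.depth >= min_depth
            and v.pop_frequency <= max_pop_freq
            and v.impact in DAMAGING]
    # Rank the rarest, most disruptive candidates first.
    return sorted(kept, key=lambda v: (v.pop_frequency, v.impact != "nonsense"))

calls = [
    Variant("ABCA4", depth=42, pop_frequency=0.0001, impact="missense"),
    Variant("RHO",   depth=6,  pop_frequency=0.0000, impact="nonsense"),    # fails depth filter
    Variant("PRPH2", depth=35, pop_frequency=0.2500, impact="synonymous"),  # common and silent
]
for v in prioritize(calls):
    print(v.gene, v.impact, v.pop_frequency)
```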
Abstract:
Background: Community phylogenetics is an emerging field of research that has made important contributions to understanding community assembly. The rapid development of this field can be attributed to the merging of phylogenetics and community ecology research to provide improved clarity on the processes that govern community structure and composition. Question: What are the major challenges that impede the sound interpretation of the patterns and processes of phylogenetic community assembly? Methods: We use four scenarios to illustrate explicitly how the phylogenetic structure of communities can exist in stable or transient phases, based on the different combinations of phylogenetic relationships and phenotypic traits among co-occurring species. We discuss these phases by invoking a two-way process of assembly and disintegration of the ecological community. Conclusions: This paper synthesizes the major concepts of community phylogenetics using habitat filtering and competition processes to elucidate how the understanding of phylogenetic community structure is currently hindered by the dynamics of community assembly and disassembly.
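Habitat filtering and competition are commonly diagnosed from the phylogenetic structure of a community: clustering (a lower-than-expected mean pairwise phylogenetic distance, MPD) is taken as consistent with filtering, overdispersion with competition. The sketch below computes a standardized effect size of MPD against random draws from the species pool; it illustrates the standard metric rather than this paper's specific framework, and the toy distance matrix is hypothetical.

```python
import numpy as np

def mpd_ses(phylo_dist, community_idx, n_null=999, rng=None):
    """Standardized effect size of mean pairwise phylogenetic distance (MPD).

    phylo_dist    : (n_pool, n_pool) pairwise phylogenetic distance matrix.
    community_idx : indices of the species observed in the community.
    Negative values suggest clustering (consistent with habitat filtering),
    positive values overdispersion (often attributed to competition).
    """
    rng = np.random.default_rng() if rng is None else rng
    D = np.asarray(phylo_dist, dtype=float)

    def mpd(idx):
        sub = D[np.ix_(idx, idx)]
        iu = np.triu_indices(len(idx), k=1)
        return sub[iu].mean()

    observed = mpd(list(community_idx))
    pool = np.arange(D.shape[0])
    k = len(community_idx)
    null = np.array([mpd(rng.choice(pool, size=k, replace=False))
                     for _ in range(n_null)])
    return (observed - null.mean()) / null.std()

# Hypothetical 6-species pool with two clades; the community holds one clade.
D = np.full((6, 6), 10.0)
D[np.ix_([0, 1, 2], [0, 1, 2])] = 2.0   # close relatives
D[np.ix_([3, 4, 5], [3, 4, 5])] = 2.0
np.fill_diagonal(D, 0.0)
print(mpd_ses(D, community_idx=[0, 1, 2], rng=np.random.default_rng(0)))
```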
Abstract:
For glioblastoma (GBM), survival classification has primarily relied on clinical criteria, exemplified by the Radiation Therapy Oncology Group (RTOG) recursive partitioning analysis (RPA). We sought to improve tumor classification by combining tumor biomarkers with the clinical RPA data. To accomplish this, we first developed 4 molecular biomarkers: one derived from gene expression profiling, a glioma CpG island methylator phenotype, a novel MGMT promoter methylation assay, and IDH1 mutations. A molecular predictor (MP) model was created with these 4 biomarkers on a training set of 220 retrospectively collected archival GBM tumors. This MP was further combined with RPA classification to develop a molecular-clinical predictor (MCP). The median survivals for the combined, 4-class MCP were 65 months, 31 months, 13 months, and 9 months, which was significantly improved when compared with the RPA alone. The MCP was then applied to 725 samples from the RTOG-0525 cohort, showing median survival for each risk group of NR, 26 months, 16 months, and 11 months. The MCP was significantly improved over the RPA at outcome prediction in the RTOG 0525 cohort, with a 33% increase in explained variation with respect to survival, validating the result obtained in the training set. To illustrate the benefit of the MCP for patient stratification, we examined progression-free survival (PFS) for patients receiving standard-dose temozolomide (SD-TMZ) vs. dose-dense TMZ (DD-TMZ) in RPA and MCP risk groups. A significant difference between DD-TMZ and SD-TMZ was observed in the poorest surviving MCP risk group, with a median PFS of 6 months vs. 3 months (p = 0.048, log-rank test). This difference was not seen using the RPA classification alone. In summary, we have developed a combined molecular-clinical predictor that appears to improve outcome prediction when compared with clinical variables alone. This MCP may serve to better identify patients requiring intensive treatments beyond the standard of care.
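The progression-free survival comparison reported above rests on a log-rank test within an MCP risk group. Below is a minimal sketch of such a comparison, assuming the lifelines package and purely hypothetical survival times (not the RTOG-0525 data).

```python
import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical PFS data (months) for patients in one poor-prognosis risk group;
# these numbers are placeholders, not the RTOG-0525 data.
rng = np.random.default_rng(2)
pfs_dd_tmz = rng.exponential(6.0, size=40)         # dose-dense arm
pfs_sd_tmz = rng.exponential(3.0, size=40)         # standard-dose arm
event_dd = np.ones_like(pfs_dd_tmz, dtype=bool)    # True = progression observed
event_sd = np.ones_like(pfs_sd_tmz, dtype=bool)

result = logrank_test(pfs_dd_tmz, pfs_sd_tmz,
                      event_observed_A=event_dd,
                      event_observed_B=event_sd)
print(f"log-rank statistic = {result.test_statistic:.2f}, p = {result.p_value:.3f}")
```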