35 results for "Processing wikipedia data"

at Université de Lausanne, Switzerland


Relevance: 90.00%

Publisher:

Abstract:

Researchers working in the field of global connectivity analysis using diffusion magnetic resonance imaging (MRI) can count on a wide selection of software packages for processing their data, with methods ranging from the reconstruction of the local intra-voxel axonal structure to the estimation of the trajectories of the underlying fibre tracts. However, each package is generally task-specific and uses its own conventions and file formats. In this article we present the Connectome Mapper, a software pipeline aimed at helping researchers through the tedious process of organising, processing and analysing diffusion MRI data to perform global brain connectivity analyses. Our pipeline is written in Python and is freely available as open-source at www.cmtk.org.
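
The Connectome Mapper itself is distributed at www.cmtk.org; as a rough illustration of the stage-based pipeline idea described above, the sketch below chains hypothetical processing stages in Python. All names and signatures here are invented for illustration and are not the actual Connectome Mapper API.

```python
# Illustrative sketch only: a minimal stage-chaining pattern for a diffusion-MRI
# connectivity pipeline. Stage names are hypothetical, not the Connectome Mapper API.
from typing import Callable, Dict, List

Stage = Callable[[Dict], Dict]

def run_pipeline(context: Dict, stages: List[Stage]) -> Dict:
    """Run each stage in order, passing a shared context dict between them."""
    for stage in stages:
        context = stage(context)
    return context

def organise_data(ctx: Dict) -> Dict:
    ctx["layout"] = f"subject {ctx['subject_id']} organised into a standard folder layout"
    return ctx

def reconstruct_local_model(ctx: Dict) -> Dict:
    ctx["local_model"] = "intra-voxel axonal structure (e.g. tensor/ODF) reconstructed"
    return ctx

def track_fibres(ctx: Dict) -> Dict:
    ctx["tracts"] = "fibre-tract trajectories estimated from the local model"
    return ctx

def build_connectome(ctx: Dict) -> Dict:
    ctx["connectome"] = "region-to-region connectivity matrix assembled from tracts"
    return ctx

if __name__ == "__main__":
    result = run_pipeline({"subject_id": "sub-01"},
                          [organise_data, reconstruct_local_model,
                           track_fibres, build_connectome])
    print(result["connectome"])
```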

Relevance: 80.00%

Publisher:

Abstract:

In recent years, electrical methods have often been used for the investigation of subsurface structures. Electrical resistivity tomography (ERT) is a useful, non-invasive and spatially integrative prospecting technique. The ERT method has improved significantly with the development of new inversion algorithms and increasingly efficient data-collection techniques: multichannel technology and powerful computers allow resistivity data to be collected and processed within a few hours. Application domains are numerous and varied: geology and hydrogeology, civil engineering and geotechnics, archaeology and environmental studies. In particular, electrical methods are commonly used in hydrological studies of the vadose zone. The aim of this study was to develop a multichannel, automatic, non-invasive, reliable and inexpensive 3D monitoring system designed to follow electrical resistivity variations in soil during rainfall. Because of technical limitations, and in order not to disturb the subsurface, the proposed measurement device uses a non-conventional electrode set-up in which all the current electrodes are located near the edges of the survey grid. Using numerical modelling, the most appropriate arrays were selected for detecting vertical and lateral variations of electrical resistivity with a permanent surveying installation. The results show that a pole-dipole array has better resolution than a pole-pole array and can successfully follow vertical and lateral resistivity variations despite the non-conventional electrode configuration used. Field data were then collected at a test site to assess the efficiency of the proposed monitoring technique. The system allows the 3D infiltration process to be followed during a rainfall event. A good correlation is observed between the numerical modelling results and the field data, which confirm that the pole-dipole data give a better-resolved image than the pole-pole data. The new device and technique make it possible to better characterize zones of preferential flow and to quantify the role of lithology and pedology in flood-generating hydrological processes.
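
As a side illustration of why array choice matters (not part of the study above), the geometric factors that convert a measured transfer resistance into apparent resistivity can be compared for the two arrays. The sketch below uses the standard textbook formulas K = 2πa for a pole-pole array and K = 2πn(n+1)a for a pole-dipole array with dipole length a and separation factor n; the numerical values are purely illustrative.

```python
import math

def apparent_resistivity_pole_pole(resistance_ohm: float, a_m: float) -> float:
    """Apparent resistivity (ohm.m) for a pole-pole array with electrode spacing a."""
    k = 2.0 * math.pi * a_m  # standard pole-pole geometric factor
    return k * resistance_ohm

def apparent_resistivity_pole_dipole(resistance_ohm: float, a_m: float, n: int) -> float:
    """Apparent resistivity (ohm.m) for a pole-dipole array with dipole length a
    and separation factor n (remote electrodes idealized as being at infinity)."""
    k = 2.0 * math.pi * n * (n + 1) * a_m  # standard pole-dipole geometric factor
    return k * resistance_ohm

if __name__ == "__main__":
    # Example: the same 0.5 ohm transfer resistance interpreted with each array.
    print(apparent_resistivity_pole_pole(0.5, a_m=2.0))
    print(apparent_resistivity_pole_dipole(0.5, a_m=2.0, n=4))
```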

Relevance: 30.00%

Publisher:

Abstract:

There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are reflected in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth. Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
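
As a generic illustration of the band-limited spectral-balancing idea (a simplified sketch, not the processing flow or parameters used in the study), a single trace can be whitened or "blued" by reshaping its amplitude spectrum inside the signal band while keeping the phase untouched:

```python
import numpy as np

def spectral_balance(trace, dt, f_lo, f_hi, blue_exponent=0.0, eps=1e-6):
    """Band-limited spectral balancing of a single trace.

    blue_exponent = 0.0 flattens (whitens) the amplitude spectrum inside
    [f_lo, f_hi]; a positive exponent tilts it towards higher frequencies
    (band-limited spectral blueing). Phase is left unchanged.
    """
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    amp, phase = np.abs(spec), np.angle(spec)

    band = (freqs >= f_lo) & (freqs <= f_hi)
    target = np.zeros_like(amp)
    # Desired amplitude inside the band: constant (white) or ~ f**blue_exponent (blue).
    target[band] = np.mean(amp[band]) * (freqs[band] / freqs[band].mean()) ** blue_exponent

    gain = np.where(band, target / (amp + eps), 1.0)  # leave out-of-band amplitudes alone
    return np.fft.irfft(gain * amp * np.exp(1j * phase), n=n)

# Example: whiten vs. "blue" a synthetic georadar-like trace (1 ns sampling,
# assumed 20-150 MHz pass band; real corner frequencies would be data-driven).
dt = 1e-9
t = np.arange(512) * dt
trace = np.sin(2 * np.pi * 5e7 * t) * np.exp(-t / 1e-7)
whitened = spectral_balance(trace, dt, f_lo=2e7, f_hi=1.5e8, blue_exponent=0.0)
blued = spectral_balance(trace, dt, f_lo=2e7, f_hi=1.5e8, blue_exponent=1.0)
```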

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: The optimal coronary MR angiography sequence has yet to be determined. We sought to quantitatively and qualitatively compare four coronary MR angiography sequences. SUBJECTS AND METHODS: Free-breathing coronary MR angiography was performed in 12 patients using four imaging sequences (turbo field-echo, fast spin-echo, balanced fast field-echo, and spiral turbo field-echo). Quantitative comparisons, including signal-to-noise ratio, contrast-to-noise ratio, vessel diameter, and vessel sharpness, were performed using a semiautomated analysis tool. Accuracy for detection of hemodynamically significant disease (> 50%) was assessed in comparison with radiographic coronary angiography. RESULTS: Signal-to-noise and contrast-to-noise ratios were markedly increased using the spiral (25.7 +/- 5.7 and 15.2 +/- 3.9) and balanced fast field-echo (23.5 +/- 11.7 and 14.4 +/- 8.1) sequences compared with the turbo field-echo (12.5 +/- 2.7 and 8.3 +/- 2.6) sequence (p < 0.05). Vessel diameter was smaller with the spiral sequence (2.6 +/- 0.5 mm) than with the other techniques (turbo field-echo, 3.0 +/- 0.5 mm, p = 0.6; balanced fast field-echo, 3.1 +/- 0.5 mm, p < 0.01; fast spin-echo, 3.1 +/- 0.5 mm, p < 0.01). Vessel sharpness was highest with the balanced fast field-echo sequence (61.6% +/- 8.5% compared with turbo field-echo, 44.0% +/- 6.6%; spiral, 44.7% +/- 6.5%; fast spin-echo, 18.4% +/- 6.7%; p < 0.001). The overall accuracies of the sequences were similar (range, 74% for turbo field-echo to 79% for spiral). Scanning time was longest for the fast spin-echo sequences (10.5 +/- 0.6 min) and shortest for the spiral acquisitions (5.2 +/- 0.3 min). CONCLUSION: Advantages in signal-to-noise and contrast-to-noise ratios, vessel sharpness, and the qualitative results appear to favor spiral and balanced fast field-echo coronary MR angiography sequences, although subjective accuracy for the detection of coronary artery disease was similar across sequences.
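
For readers unfamiliar with the quantitative endpoints, SNR and CNR are typically computed from region-of-interest statistics roughly as sketched below. This is a generic illustration with synthetic values; the study used its own semiautomated analysis tool, whose exact definitions may differ.

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean signal in a vessel ROI over noise standard deviation."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(vessel_roi: np.ndarray, background_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio: vessel/background signal difference over noise standard deviation."""
    return float((np.mean(vessel_roi) - np.mean(background_roi)) / np.std(noise_roi))

# Example with synthetic ROI pixel values (arbitrary units).
rng = np.random.default_rng(0)
vessel = rng.normal(260, 10, 200)      # bright coronary lumen
myocardium = rng.normal(120, 10, 200)  # adjacent background tissue
air = rng.normal(0, 10, 200)           # noise-only region outside the body
print(f"SNR = {snr(vessel, air):.1f}, CNR = {cnr(vessel, myocardium, air):.1f}")
```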

Relevance: 30.00%

Publisher:

Abstract:

Better integration of the information conveyed by traces within an intelligence-led framework would allow forensic science to participate more fully in security assessments through forensic intelligence (Part I). In this view, the collection of data by examining crime scenes is an integral part of intelligence processes. This conception frames our proposal for a model that promotes better use of the knowledge available in the organisation to drive and support crime scene examination. The suggested model also clarifies the uncomfortable situation of crime scene examiners, who must simultaneously comply with the needs and expectations of justice and serve organisations that are mostly driven by broader security objectives. It also opens new perspectives for forensic science and crime scene investigation by proposing directions other than the traditional path suggested by dominant movements in these fields.

Relevance: 30.00%

Publisher:

Abstract:

A methodology of exploratory data analysis investigating the phenomenon of orographic precipitation enhancement is proposed. The precipitation observations obtained from three Swiss Doppler weather radars are analysed for the major precipitation event of August 2005 in the Alps. Image processing techniques are used to detect significant precipitation cells/pixels from radar images while filtering out spurious effects due to ground clutter. The contribution of topography to precipitation patterns is described by an extensive set of topographical descriptors computed from the digital elevation model at multiple spatial scales. Additionally, the motion vector field is derived from successive radar images and integrated into the set of topographic features to highlight the slopes exposed to the main flows. Following exploratory data analysis with a recent spectral clustering algorithm, it is shown that orographic precipitation cells are generated under specific flow and topographic conditions. Repeatability of precipitation patterns in particular spatial locations is found to be linked to specific local terrain shapes, e.g. at the top of hills and on the upwind side of the mountains. This methodology and our empirical findings for the Alpine region provide a basis for building computational data-driven models of orographic enhancement and triggering of precipitation. Copyright (C) 2011 Royal Meteorological Society.
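
As a generic sketch of the clustering step (illustrative only; the feature names, synthetic data and parameters below are assumptions, not the paper's actual descriptor set or settings), per-pixel topographic and flow features can be grouped with an off-the-shelf spectral clustering implementation:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

# Hypothetical per-pixel feature matrix for detected precipitation cells:
# columns stand in for multi-scale slope, curvature, elevation and the
# projection of the radar-derived motion vector onto the local terrain gradient.
rng = np.random.default_rng(42)
n_pixels = 500
features = np.column_stack([
    rng.normal(15, 5, n_pixels),      # slope (degrees)
    rng.normal(0, 1, n_pixels),       # curvature (1/km)
    rng.normal(1500, 400, n_pixels),  # elevation (m)
    rng.normal(3, 2, n_pixels),       # upwind exposure (flow dot terrain gradient)
])

X = StandardScaler().fit_transform(features)
labels = SpectralClustering(n_clusters=4, affinity="rbf",
                            random_state=0).fit_predict(X)
print(np.bincount(labels))  # size of each cluster of precipitation pixels
```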

Relevance: 30.00%

Publisher:

Abstract:

The metalloprotease meprin has been implicated in tissue remodelling due to its capability to degrade extracellular matrix components. Here, we investigated the susceptibility of tenascin-C to cleavage by meprin beta and the functional properties of its proteolytic fragments. A set of monoclonal antibodies against chicken and human tenascin-C allowed the mapping of proteolytic fragments generated by meprin beta. In chicken tenascin-C, meprin beta processed all three major splicing variants by removal of 10 kDa N-terminal and 38 kDa C-terminal peptides, leaving a large central part of the subunits intact. A similar cleavage pattern was found for the large human tenascin-C variant, where two N-terminal peptides (10 or 15 kDa) and two C-terminal fragments (40 and 55 kDa) were removed from the intact subunit. N-terminal sequencing revealed the exact amino acid positions of the cleavage sites. In both chicken and human tenascin-C, N-terminal cleavages occurred just before and/or after the heptad repeats involved in subunit oligomerization. In the human protein, an additional cleavage site was identified in the alternative fibronectin type III repeat D. Whereas all these sites are known to be attacked by several other proteases, a cleavage unique to meprin beta was localized to the 7th constant fibronectin type III repeat in both chicken and human tenascin-C, thereby removing the C-terminal domain involved in its anti-adhesive activity. In cell adhesion assays, meprin beta-digested human tenascin-C was not able to interfere with fibronectin-mediated cell spreading, confirming cleavage in the anti-adhesive domain. Whereas the expression of meprin beta and tenascin-C does not overlap in normal colon tissue, inflamed lesions of the mucosa from patients with Crohn's disease exhibited many meprin beta-positive leukocytes in regions where tenascin-C was strongly induced. Our data indicate that, at least under pathological conditions, meprin beta might attack specific functional sites in tenascin-C that are important for its oligomerization and anti-adhesive activity. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

Evaluation of segmentation methods is a crucial aspect of image processing, especially in the medical imaging field, where small differences between segmented regions of the anatomy can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside of some reference regions, called gold standards. Although other measures have also been used, in this work we propose a set of new similarity measures based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized by applying a Principal Component Analysis of the data, obtaining a simplified graphical method to compare different segmentation results. We have carried out an intensive study using several classic segmentation methods applied to a set of simulated brain MRI data with several noise and RF inhomogeneity levels, and also to real data, showing that the new measures proposed here, together with the results obtained from the multidimensional evaluation, improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
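
The kind of multidimensional comparison described above can be sketched generically as follows (illustrative only; the specific measures and feature set used in the paper differ): compute several per-segmentation features against a gold standard and project them with PCA for visual comparison.

```python
import numpy as np
from sklearn.decomposition import PCA

def dice(seg: np.ndarray, gold: np.ndarray) -> float:
    """Classic overlap measure between binary masks."""
    inter = np.logical_and(seg, gold).sum()
    return 2.0 * inter / (seg.sum() + gold.sum())

def boundary_fraction(seg: np.ndarray) -> float:
    """Crude boundary feature: fraction of voxels whose right neighbour differs."""
    return float(np.mean(seg[:, :-1] != seg[:, 1:]))

def mean_misclassified_intensity(img, seg, gold) -> float:
    """Mean image intensity over misclassified voxels (0 if none)."""
    wrong = seg != gold
    return float(img[wrong].mean()) if wrong.any() else 0.0

rng = np.random.default_rng(1)
gold = rng.random((64, 64)) > 0.5
img = rng.normal(100, 20, (64, 64))
# Three synthetic "segmentation results" obtained by flipping a few gold-standard voxels.
segmentations = [np.logical_xor(gold, rng.random((64, 64)) > p) for p in (0.95, 0.9, 0.8)]

# One feature vector per segmentation result, then a 2D PCA projection for plotting.
features = np.array([[dice(s, gold), boundary_fraction(s),
                      mean_misclassified_intensity(img, s, gold)] for s in segmentations])
coords = PCA(n_components=2).fit_transform(features)
print(coords)
```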

Relevance: 30.00%

Publisher:

Abstract:

Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music and the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory, and that this involvement is independent of low-level visual processing. Most importantly, we demonstrate that V1 and V5/MT are functionally linked and temporally synchronized during time encoding, whereas they are functionally independent and operate serially (V1 followed by V5/MT) while maintaining temporal information in working memory. These data challenge the traditional view of V1 and V5/MT as visuo-spatial feature detectors and highlight the functional contribution and temporal dynamics of these brain regions in the processing of time in the millisecond range. The present project resulted in the paper 'How the visual brain encodes and keeps track of time' by Paolo Salvioni, Lysiann Kalmbach, Micah Murray and Domenica Bueti, which has been submitted for publication to the Journal of Neuroscience.

Relevance: 30.00%

Publisher:

Abstract:

The sparsely spaced, highly permeable fractures of the granitic rock aquifer at Stang-er-Brune (Brittany, France) form a well-connected fracture network of high permeability but unknown geometry. Previous work based on optical and acoustic logging, together with single-hole and cross-hole flowmeter data acquired in three neighbouring boreholes (70-100 m deep), has identified the most important permeable fractures crossing the boreholes and their hydraulic connections. To constrain possible flow paths by estimating the geometries of known and previously unknown fractures, we have acquired, processed and interpreted multifold single- and cross-hole GPR data using 100 and 250 MHz antennas. The GPR data processing scheme, consisting of time-zero corrections, scaling, bandpass filtering, F-X deconvolution, eigenvector filtering, muting, pre-stack Kirchhoff depth migration and stacking, was used to differentiate fluid-filled fracture reflections from source-generated noise. The final stacked and pre-stack depth-migrated GPR sections provide high-resolution images of individual fractures (dipping 30-90°) in the surroundings of each borehole (2-20 m for the 100 MHz antennas; 2-12 m for the 250 MHz antennas) in a 2D plane projection that are of superior quality to those obtained from single-offset sections. Most fractures previously identified from hydraulic testing can be correlated to reflections in the single-hole data. Several previously unknown major near-vertical fractures have also been identified away from the boreholes.
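
To make one step of that processing flow concrete, a zero-phase bandpass filter applied trace by trace can be sketched as below. This is a generic illustration with assumed corner frequencies, not the parameters used for this dataset.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_traces(section: np.ndarray, dt_s: float,
                    f_lo_hz: float, f_hi_hz: float, order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth bandpass applied to each trace (column) of a GPR section."""
    fs = 1.0 / dt_s
    sos = butter(order, [f_lo_hz, f_hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, section, axis=0)  # forward-backward filtering -> no time shift

# Example: 512 samples x 200 traces, 1 ns sampling, pass band assumed 50-200 MHz
# (roughly bracketing a 100 MHz antenna; actual corner frequencies would be data-driven).
rng = np.random.default_rng(3)
section = rng.normal(size=(512, 200))
filtered = bandpass_traces(section, dt_s=1e-9, f_lo_hz=50e6, f_hi_hz=200e6)
```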

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Solexa/Illumina short-read ultra-high-throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data pose new challenges; currently, a fair proportion of the tags are routinely discarded because they cannot be matched to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base-calling algorithm using model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and the number of usable tags compared with Solexa's data processing pipeline by an average of 15%. An R package is provided that allows fast and accurate base calling of Solexa's fluorescence intensity files and the production of informative diagnostic plots.
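
The IUPAC coding idea can be sketched generically as follows. This toy version simply thresholds normalized channel intensities, whereas the paper's method uses model-based clustering and probability theory to decide which bases are ambiguous; only the IUPAC table itself is standard.

```python
# Map a set of plausible bases at one cycle to the corresponding IUPAC symbol.
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("CG"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M", frozenset("CGT"): "B", frozenset("AGT"): "D",
    frozenset("ACT"): "H", frozenset("ACG"): "V", frozenset("ACGT"): "N",
}

def call_base(intensities: dict, rel_threshold: float = 0.5) -> str:
    """Toy base caller: any channel within rel_threshold of the brightest one is
    considered plausible, and the set of plausible bases is IUPAC-encoded."""
    top = max(intensities.values())
    plausible = frozenset(b for b, v in intensities.items() if v >= rel_threshold * top)
    return IUPAC[plausible]

# One cycle where A and G are nearly equally bright -> ambiguous call 'R'.
print(call_base({"A": 980.0, "C": 40.0, "G": 910.0, "T": 25.0}))  # R
print(call_base({"A": 990.0, "C": 30.0, "G": 55.0, "T": 20.0}))   # A
```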

Relevance: 30.00%

Publisher:

Abstract:

Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this exploitation has resulted in the rapid development of the Internet. Nowadays, the Internet is the biggest container of resources. Information sources such as Wikipedia, Dmoz and the open data available on the net hold great informational potential for mankind. Easy and free web access is one of the major features characterizing Internet culture. Ten years ago, the web was completely dominated by English; today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We thus present the theoretical foundations as well as the ontology itself, named the Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aimed at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to provide some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be exhaustive.
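
As a purely illustrative sketch (the actual Linguistic Meta-Model ontology and VIKI modules are not reproduced here; every name below is invented), a semiotic-triangle-style record and a minimal module chain in the spirit of a modular NLP system might look like this:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SemioticEntry:
    """One illustrative semiotic-triangle record: a sign (term), the concept it
    evokes, and the referent it stands for. Field names are invented, not the
    Linguistic Meta-Model schema."""
    sign: str      # linguistic form, e.g. a term extracted from text
    concept: str   # mental/logical notion associated with the sign
    referent: str  # thing in the world the sign ultimately points to
    language: str  # supports the multilingual aspect discussed above

# A minimal modular chain: each module is a plain function, applied in sequence.
def extract_terms(text: str) -> List[str]:
    return [w.strip(".,").lower() for w in text.split() if len(w) > 3]

def link_to_entries(terms: List[str]) -> List[SemioticEntry]:
    return [SemioticEntry(sign=t, concept=f"concept:{t}",
                          referent=f"referent:{t}", language="en") for t in terms]

pipeline: List[Callable] = [extract_terms, link_to_entries]
data = "Wikipedia articles describe entities in many languages"
for module in pipeline:
    data = module(data)
print(data[:2])
```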

Relevance: 30.00%

Publisher:

Abstract:

The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and evidence from neurophysiological and neuroimaging studies we discuss the neural mechanisms likely to underlie this effect.

Relevance: 30.00%

Publisher:

Abstract:

Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil-type classification), mapping of continuous environmental and pollution information (soil pollution by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
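
As a minimal sketch of the first task mentioned above, soil-type classification with a kernel method can be illustrated with an RBF-kernel support vector classifier. The synthetic data, features and parameters below are assumptions for illustration, not those used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic monitoring-network data: coordinates plus an auxiliary variable,
# with an integer soil-type label per sample (purely illustrative).
rng = np.random.default_rng(7)
n = 300
X = np.column_stack([rng.uniform(0, 100, n),      # easting (km)
                     rng.uniform(0, 100, n),      # northing (km)
                     rng.normal(500, 150, n)])    # auxiliary variable, e.g. elevation (m)
y = (X[:, 0] + 0.05 * X[:, 2] > 75).astype(int) + (X[:, 1] > 60).astype(int)  # 3 soil classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel support vector classifier: one example of the kernel methods discussed above.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```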