995 results for 3D integration
Abstract:
This project analyses two image-correspondence algorithms with the aim of speeding up 3D reconstruction via MVS (Multi-View Stereo). The full reconstruction pipeline is analysed and, building on existing software, the SIFT and BRISK algorithms are compared. The tests carried out show that BRISK is faster and better suited for 3D reconstruction.
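A large part of BRISK's speed advantage over SIFT lies in the descriptors themselves: BRISK produces 512-bit binary strings compared with Hamming distance (XOR plus popcount), while SIFT produces 128-dimensional float vectors compared with Euclidean distance. The following minimal sketch illustrates only that matching-cost difference on synthetic random descriptors; it does not reimplement either detector (real implementations are available in, e.g., OpenCV):

```python
import random
import time

random.seed(0)
N = 100  # descriptors per image (illustrative, not from the project)

# Synthetic stand-ins: SIFT-like 128-D float vectors, BRISK-like 512-bit strings.
sift_a = [[random.random() for _ in range(128)] for _ in range(N)]
sift_b = [[random.random() for _ in range(128)] for _ in range(N)]
brisk_a = [random.getrandbits(512) for _ in range(N)]
brisk_b = [random.getrandbits(512) for _ in range(N)]

def euclidean_sq(u, v):
    # Squared Euclidean distance between two float descriptors.
    return sum((x - y) ** 2 for x, y in zip(u, v))

def hamming(u, v):
    # Hamming distance between two binary descriptors (XOR + popcount).
    return bin(u ^ v).count("1")

t0 = time.perf_counter()
sift_matches = [min(range(N), key=lambda j: euclidean_sq(d, sift_b[j])) for d in sift_a]
t_sift = time.perf_counter() - t0

t0 = time.perf_counter()
brisk_matches = [min(range(N), key=lambda j: hamming(d, brisk_b[j])) for d in brisk_a]
t_brisk = time.perf_counter() - t0

print(f"brute-force matching: float/Euclidean {t_sift:.3f}s, binary/Hamming {t_brisk:.3f}s")
```

On typical CPython runs the binary matching loop is much faster, which mirrors, in a very rough way, why binary descriptors such as BRISK's outpace SIFT in the matching stage.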
Abstract:
A high-resolution three-dimensional (3D) seismic reflection system for small-scale targets in lacustrine settings has been developed. Its main characteristics include navigation and shot-triggering software that fires the seismic source at regular distance intervals (max. error of 0.25 m) with real-time control on navigation using differential GPS (Global Positioning System). Receiver positions are accurately calculated (error < 0.20 m) with the aid of GPS antennas attached to the end of each of three 24-channel streamers. Two telescopic booms hold the streamers at a distance of 7.5 m from each other. With a receiver spacing of 2.5 m, the bin dimension is 1.25 m in the inline and 3.75 m in the crossline direction. To test the system, we conducted a 3D survey of about 1 km² in Lake Geneva, Switzerland, over a complex fault zone. A 5-m shot spacing resulted in a nominal fold of 6. A double-chamber bubble-cancelling 15/15 in³ air gun (40-650 Hz) operated at 80 bars and 1 m depth gave a signal penetration of 300 m below water bottom and a best vertical resolution of 1.1 m. Processing followed a conventional scheme, but had to be adapted to the high sampling rates, and our unconventional navigation data needed conversion to industry standards. The high-quality data enabled us to construct maps of seismic horizons and fault surfaces in three dimensions. The system proves to be well adapted to investigate complex structures by providing non-aliased images of reflectors with dips up to 30 degrees.
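The geometry figures quoted in this abstract are internally consistent and can be checked in a few lines: CMP bin dimensions are half the corresponding trace spacings, and the nominal fold follows from channel count, receiver spacing and shot spacing via the standard single-streamer approximation. All input numbers below come from the text:

```python
# Survey-geometry arithmetic, all figures taken from the abstract.
receiver_spacing = 2.5       # m, along each streamer (inline)
streamer_separation = 7.5    # m, held by the telescopic booms (crossline)
shot_spacing = 5.0           # m
channels_per_streamer = 24

# CMP bin dimensions are half the corresponding trace spacings.
bin_inline = receiver_spacing / 2        # -> 1.25 m
bin_crossline = streamer_separation / 2  # -> 3.75 m

# Nominal fold: number of traces sharing a CMP bin (end-on, single-streamer formula).
fold = channels_per_streamer * receiver_spacing / (2 * shot_spacing)  # -> 6.0

print(bin_inline, bin_crossline, fold)
```

These reproduce the 1.25 m × 3.75 m bin and the nominal fold of 6 stated above.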
Abstract:
P130 A HIGH-RESOLUTION 2D/3D SEISMIC STUDY OF A THRUST FAULT ZONE IN LAKE GENEVA, SWITZERLAND. M. Scheidhauer, M. Beres, D. Dupuy and F. Marillier, Institute of Geophysics, University of Lausanne, 1015 Lausanne, Switzerland. Summary: A high-resolution three-dimensional (3D) seismic reflection survey has been conducted in Lake Geneva near the city of Lausanne, Switzerland, where the faulted molasse basement (Tertiary sandstones) is overlain by complex Quaternary sedimentary structures. Using a single 48-channel streamer, an area of 1200 m x 600 m was surveyed in 10 days. With a 5-m shot spacing and a receiver spacing of 2.5 m in the inline direction and 7.5 m in the crossline direction, 12-fold data coverage was achieved. A maximum penetration depth of ~150 m was achieved with a 15 cu. in. water gun operated at 140 bars. The multi-channel data allow the determination of an accurate velocity field for 3D processing, and they show particularly clean images of the fault zone and the overlying sediments in horizontal and vertical sections. In order to compare different sources, inline 55 was repeated with a 30/30 and a 15/15 cu. in. double-chamber air gun (Mini GI) operated at 100 and 80 bars, respectively. A maximum penetration depth of ~450 m was achieved with this source.
Abstract:
Computed Tomography (CT) is the standard imaging modality for tumor volume delineation in radiotherapy treatment planning of retinoblastoma, despite some inherent limitations. CT is very useful in providing information on physical density for dose calculation and morphological volumetric information, but has low sensitivity in assessing tumor viability. On the other hand, 3D ultrasound (US) allows a highly accurate definition of the tumor volume thanks to its high spatial resolution, but it is currently used only for diagnosis and follow-up rather than being integrated into treatment planning. Our ultimate goal is automatic segmentation of the gross tumor volume (GTV) in the 3D US, segmentation of the organs at risk (OAR) in the CT, and registration of both modalities. In this paper, we present some preliminary results in this direction. We present 3D active-contour-based segmentation of the eyeball and the lens in CT images; the presented approach incorporates prior knowledge of the anatomy by using a 3D geometrical eye model. The automated segmentation results are validated by comparison with manual segmentations. We then present two approaches for the fusion of 3D CT and US images: (i) landmark-based transformation, and (ii) object-based transformation that makes use of eyeball contour information in CT and US images.
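The "landmark-based transformation" mentioned for CT/US fusion is, in its simplest rigid form, the classic Procrustes/Kabsch problem: given corresponding landmarks picked in both modalities, find the least-squares rotation and translation between them. The sketch below shows that generic solution via SVD; it is an assumption-laden illustration of the idea, not the authors' implementation (which may well be affine or deformable):

```python
import numpy as np

def landmark_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping landmark set P onto Q.

    P, Q: (N, 3) arrays of corresponding landmarks (e.g. picked on US and CT).
    Classic Kabsch/Procrustes solution via SVD.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred landmarks
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation and translation exactly.
rng = np.random.default_rng(1)
P = rng.normal(size=(6, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -2.0, 5.0])
Q = P @ R_true.T + t_true
R, t = landmark_rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

For noise-free correspondences the recovery is exact; with noisy clinical landmarks the same formula gives the least-squares optimum.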
Abstract:
In conducting genome-wide association studies (GWAS), analytical approaches leveraging biological information may further understanding of the pathophysiology of clinical traits. To discover novel associations with estimated glomerular filtration rate (eGFR), a measure of kidney function, we developed a strategy for integrating prior biological knowledge into the existing GWAS data for eGFR from the CKDGen Consortium. Our strategy focuses on single nucleotide polymorphisms (SNPs) in genes that are connected by functional evidence, determined by literature mining and gene ontology (GO) hierarchies, to genes near previously validated eGFR associations. It then requires association thresholds consistent with multiple testing, and finally evaluates novel candidates by independent replication. Among the samples of European ancestry, we identified a genome-wide significant SNP in FBXL20 (P = 5.6 × 10⁻⁹) in meta-analysis of all available data, and additional SNPs at the INHBC, LRP2, PLEKHA1, SLC3A2 and SLC7A6 genes meeting multiple-testing-corrected significance for replication and overall P-values of 4.5 × 10⁻⁴ to 2.2 × 10⁻⁷. Neither the novel PLEKHA1 nor FBXL20 associations, both further supported by association with eGFR among African Americans and with transcript abundance, would have been implicated by eGFR candidate gene approaches. LRP2, encoding the megalin receptor, was identified through connection with the previously known eGFR gene DAB2 and extends understanding of the megalin system in kidney function. These findings highlight integration of existing genome-wide association data with independent biological knowledge to uncover novel candidate eGFR associations, including candidates lacking known connections to kidney-specific pathways. The strategy may also be applicable to other clinical phenotypes, although more testing will be needed to assess its potential for discovery in general.
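The filtering step described above can be sketched in a few lines: restrict attention to functionally connected candidate SNPs, apply a multiple-testing threshold sized to that candidate set (a Bonferroni correction is used here as a stand-in for the paper's exact procedure), and separately flag conventional genome-wide significance. Gene names and P-values below are illustrative placeholders, except the FBXL20 value quoted in the abstract:

```python
# Hedged sketch of candidate-SNP filtering; only FBXL20's P-value is from the abstract.
GENOME_WIDE = 5e-8  # conventional genome-wide significance threshold

candidates = {          # candidate gene -> discovery P-value
    "FBXL20": 5.6e-9,   # quoted in the abstract
    "GENE_X": 3.0e-4,   # hypothetical
    "GENE_Y": 0.02,     # hypothetical
}

# Bonferroni threshold sized to the candidate set, not the whole genome:
bonferroni = 0.05 / len(candidates)

passing = {g: p for g, p in candidates.items() if p < bonferroni}
genome_wide = {g for g, p in candidates.items() if p < GENOME_WIDE}

print(sorted(passing))      # candidates meeting the corrected threshold
print(sorted(genome_wide))  # conventional genome-wide significant hits
```

The point of the strategy is visible even in this toy: a candidate like the hypothetical GENE_X survives the candidate-set threshold although it would fail the genome-wide one.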
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations requiring the quantification of grain sizes in a rock's microstructure. The primary data for grain size estimation are either 1D (i.e. line-intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different treatments over the years, and several studies assume a particular probability function (e.g., logarithmic, square-root) to calculate statistical parameters such as the mean, median, mode or skewness of a crystal size distribution. The average grain sizes finally calculated have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain-size-sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, even though the primary measurements were obtained in different ways. To report an average grain size, we propose using the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements in the case of unimodal grain size distributions. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography.
The comparison of directly measured 3D data, stereological data and directly presented 2D data shows problems with the quality of the smallest grain sizes and the overestimation of small grain sizes by stereological tools, depending on the type of CSD.
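The area-weighted mean recommended above for 2D measurements can be sketched directly: each grain's size is its equivalent circular diameter d = 2·sqrt(A/π), and the mean weights each grain by its sectional area. This is the generic textbook definition under that assumption, not the paper's exact implementation; the grain areas below are made up for illustration:

```python
import math

def area_weighted_mean_diameter(areas):
    """Area-weighted mean grain size from 2D sectional areas.

    Each grain contributes its equivalent circular diameter d = 2*sqrt(A/pi),
    weighted by its area A, so large grains dominate the average.
    """
    diam = [2.0 * math.sqrt(a / math.pi) for a in areas]
    total = sum(areas)
    return sum(a * d for a, d in zip(areas, diam)) / total

# Illustrative sectional areas in square micrometres: many small grains, a few large ones.
areas = [5.0] * 50 + [100.0] * 5
m_aw = area_weighted_mean_diameter(areas)
m_arith = sum(2.0 * math.sqrt(a / math.pi) for a in areas) / len(areas)
print(m_aw, m_arith)  # area-weighted mean sits above the arithmetic mean
```

The comparison with the plain arithmetic mean makes the weighting visible: the area-weighted value is pulled toward the larger grains, which is why the choice of averaging scheme matters for paleo-piezometry.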
Abstract:
The competitiveness of businesses is increasingly dependent on their electronic networks with customers, suppliers, and partners. While the strategic and operational impact of external integration and IOS adoption has been extensively studied, much less attention has been paid to the organizational and technical design of electronic relationships. The objective of our longitudinal research project is the development of a framework for understanding and explaining B2B integration. Drawing on existing literature and empirical cases, we present a reference model (a classification scheme for B2B integration). The reference model comprises technical, organizational, and institutional levels to reflect the multiple facets of B2B integration. In this paper we investigate the current state of electronic collaboration in global supply chains, focusing on the technical view. Using an in-depth case analysis we identify five integration scenarios. In the subsequent confirmatory phase of the research we analyse 112 real-world company cases to validate these five integration scenarios. Our research advances and deepens existing studies by developing a B2B reference model, which reflects the current state of practice and is independent of specific implementation technologies. In the next stage of the research the emerging reference model will be extended to create an assessment model for analysing the maturity level of a given company in a specific supply chain.
Abstract:
Development of cardiac hypertrophy and progression to heart failure entails profound changes in myocardial metabolism, characterized by a switch from fatty acid utilization to glycolysis and lipid accumulation. We report that hypoxia-inducible factor (HIF)1alpha and PPARgamma, key mediators of glycolysis and lipid anabolism, respectively, are jointly upregulated in hypertrophic cardiomyopathy and cooperate to mediate key changes in cardiac metabolism. In response to pathologic stress, HIF1alpha activates glycolytic genes and PPARgamma, whose product, in turn, activates fatty acid uptake and glycerolipid biosynthesis genes. These changes result in increased glycolytic flux and glucose-to-lipid conversion via the glycerol-3-phosphate pathway, apoptosis, and contractile dysfunction. Ventricular deletion of Hif1alpha in mice prevents hypertrophy-induced PPARgamma activation, the consequent metabolic reprogramming, and contractile dysfunction. We propose a model in which activation of the HIF1alpha-PPARgamma axis by pathologic stress underlies key changes in cell metabolism that are characteristic of and contribute to common forms of heart disease.
Abstract:
This file contains the complete ontology (OntoProcEDUOC_OKI_Final.owl). When loading the file for editing, the OKI ontology corresponding to the implementation level (OntoOKI_DEFINITIVA.owl) must be imported.
Abstract:
The relief of the seafloor is an important source of data for many scientists. In this paper we present an optical system for underwater 3D reconstruction. The system is formed by three cameras that take images synchronously at a constant frame rate. We use the images taken by these cameras to compute dense 3D reconstructions. We use Bundle Adjustment to estimate the motion of the trinocular rig. Given the path followed by the system, we obtain a dense map of the observed scene by registering the individual dense local reconstructions into a single, larger one.
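The final registration step described above reduces, once per-frame poses are known, to transforming each local reconstruction into a common world frame and concatenating the results. The sketch below assumes the camera-to-world poses are already given (in the paper they come from Bundle Adjustment, which is not reimplemented here); the point clouds and motions are toy values:

```python
import numpy as np

def merge_local_reconstructions(local_clouds, poses):
    """Register per-frame dense reconstructions into one global point cloud.

    local_clouds: list of (N_i, 3) arrays in each frame's camera coordinates.
    poses: list of (R, t) camera-to-world pairs for each frame, assumed known.
    Each local point p maps to the world frame as R @ p + t.
    """
    world = [pts @ R.T + t for pts, (R, t) in zip(local_clouds, poses)]
    return np.vstack(world)

# Two toy "local reconstructions" seen from two rig positions.
cloud0 = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
cloud1 = np.array([[0.0, 0.0, 1.0]])
identity = (np.eye(3), np.zeros(3))
shifted = (np.eye(3), np.array([0.5, 0.0, 0.0]))  # rig moved 0.5 m along x

global_map = merge_local_reconstructions([cloud0, cloud1], [identity, shifted])
print(global_map)
```

Real pipelines additionally refine the merged map (e.g. by ICP or by re-running bundle adjustment over all frames), but the coordinate bookkeeping is exactly this.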
Abstract:
We propose an algorithm that extracts image features that are consistent with the 3D structure of the scene. The features can be robustly tracked over multiple views and serve as vertices of planar patches that suitably represent scene surfaces, while reducing the redundancy in the description of 3D shapes. In other words, the extracted features will offer good tracking properties while providing the basis for 3D reconstruction with minimum model complexity.
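A planar patch spanned by tracked features can be tested with a standard least-squares plane fit: the singular vector of the centred points with the smallest singular value is the patch normal, and residual distances to the plane measure how planar the patch is. This is a generic sketch of that building block, not the paper's specific feature-extraction algorithm; the sample points are synthetic:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to 3D points by least squares (SVD of centred points).

    Returns (unit normal n, centroid c); residuals |(p - c) . n| measure how
    well a patch of triangulated feature points is planar.
    """
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, Vt = np.linalg.svd(P - c)
    n = Vt[-1]
    return n, c

# Noise-free synthetic points on the plane z = 2x + 3y + 1.
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]], float)
pts = np.column_stack([xy, 2 * xy[:, 0] + 3 * xy[:, 1] + 1])
n, c = fit_plane(pts)
residuals = np.abs((pts - c) @ n)
print(residuals.max())  # near zero for coplanar points
```

In practice a residual threshold on such fits decides whether a group of tracked features may serve as vertices of a single planar patch.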
Abstract:
We perceive our environment through multiple sensory channels. Nonetheless, research has traditionally focused on the investigation of sensory processing within single modalities. Thus, investigating how our brain integrates multisensory information is of crucial importance for understanding how organisms cope with a constantly changing and dynamic environment. During my thesis I have investigated how multisensory events impact our perception and brain responses, either when auditory-visual stimuli were presented simultaneously or how multisensory events at one point in time impact later unisensory processing. In "Looming signals reveal synergistic principles of multisensory integration" (Cappe, Thelen et al., 2012) we investigated the neuronal substrates involved in motion detection in depth under multisensory vs. unisensory conditions. We have shown that congruent auditory-visual looming (i.e. approaching) signals are preferentially integrated by the brain. Further, we show that early effects under these conditions are relevant for behavior, effectively speeding up responses to these combined stimulus presentations. In "Electrical neuroimaging of memory discrimination based on single-trial multisensory learning" (Thelen et al., 2012), we investigated the behavioral impact of single encounters with meaningless auditory-visual object pairings upon subsequent visual object recognition. In addition to showing that these encounters lead to impaired recognition accuracy upon repeated visual presentations, we have shown that the brain discriminates images as soon as ~100 ms post-stimulus onset according to the initial encounter context. In "Single-trial multisensory memories affect later visual and auditory object recognition" (Thelen et al., in review) we have addressed whether auditory object recognition is affected by single-trial multisensory memories, and whether recognition accuracy of sounds was similarly affected by the initial encounter context as visual objects.
We found that this is in fact the case. We propose that a common underlying brain network is differentially involved during encoding and retrieval of images and sounds based on our behavioral findings.