910 results for 3D feature extraction
Abstract:
This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-Spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weighs two terms: the image similarity and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, both on displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). On healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected.
On all CRT patients, the improvement in synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), showing the potential of the proposed algorithm for the assessment of CRT.
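The displacement-recovery step described in this abstract (forward Eulerian integration of a non-stationary velocity field) can be sketched as follows; the function names and the toy velocity field are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def integrate_displacement(velocity, points, t0, t1, n_steps):
    """Forward Euler integration of a non-stationary velocity field:
    phi(t + dt) = phi(t) + dt * v(phi(t), t).
    `velocity(points, t)` must return one velocity vector per point."""
    phi = np.array(points, dtype=float)
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        phi = phi + dt * velocity(phi, t)
        t += dt
    # Displacement field = final positions minus initial positions
    return phi - np.array(points, dtype=float)

# Toy stationary field v(x, t) = x: the exact displacement over [0, 1] is x*(e - 1)
disp = integrate_displacement(lambda x, t: x, [[1.0, 0.0]], 0.0, 1.0, 1000)
```

With this closed-form check, the first component of `disp` should approach e − 1 ≈ 1.718 as the step count grows, which is a simple way to validate such an integrator before applying it to a B-spline velocity model.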
Abstract:
In this paper we present a description of the role of definitional verbal patterns for the extraction of semantic relations. Several studies show that semantic relations can be extracted from analytic definitions contained in machine-readable dictionaries (MRDs). In addition, definitions found in specialised texts are a good starting point to search for different types of definitions where other semantic relations occur. The extraction of definitional knowledge from specialised corpora represents another interesting approach for the extraction of semantic relations. Here, we present a descriptive analysis of definitional verbal patterns in Spanish and the first steps towards the development of a system for the automatic extraction of definitional knowledge.
Abstract:
Automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce time and effort in the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, is capable of iteratively identifying, extracting and annotating positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task.
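A minimal sketch of such a seed-based bootstrapping loop, assuming a single coordination cue ("X and Y" share polarity); the corpus, the cue, and all names are invented for illustration and are not the paper's actual linguistic cues:

```python
import re

corpus = [
    "the room was clean and comfortable",
    "the staff was rude and unhelpful",
    "a comfortable and spacious room",
]

def bootstrap(seeds_pos, seeds_neg, corpus, iterations=3):
    """Iteratively grow positive/negative adjective sets from seeds,
    propagating polarity across 'X and Y' coordinations."""
    pos, neg = set(seeds_pos), set(seeds_neg)
    for _ in range(iterations):
        for sentence in corpus:
            for a, b in re.findall(r"(\w+) and (\w+)", sentence):
                # Propagate polarity in both directions of the coordination
                for x, y in ((a, b), (b, a)):
                    if x in pos:
                        pos.add(y)
                    elif x in neg:
                        neg.add(y)
        pos -= neg  # drop conflicting labels rather than guess
    return pos, neg

pos, neg = bootstrap({"clean"}, {"rude"}, corpus)
```

Starting from the single seeds "clean" and "rude", the loop labels the coordinated adjectives ("comfortable", "spacious", "unhelpful") in one pass; a real system would of course use richer patterns and confidence scores.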
Abstract:
PURPOSE: To investigate magnetization transfer (MT) effects as a new source of contrast for imaging and tracking of peripheral foot nerves. MATERIALS AND METHODS: Two sets of 3D spoiled gradient-echo images acquired with and without a saturation pulse were used to generate MT ratio (MTR) maps of 260 μm in-plane resolution for eight volunteers at 3T. Scan parameters were adjusted to minimize signal loss due to T2 dephasing, and a dedicated coil was used to improve the inherently low signal-to-noise ratio of small voxels. Resulting MTR values in foot nerves were compared with those in surrounding muscle tissue. RESULTS: Average MTR values for muscle (45.5 ± 1.4%) and nerve (21.4 ± 3.1%) were significantly different (P < 0.0001). In general, the difference in MTR values was sufficiently large to allow for intensity-based segmentation and tracking of foot nerves in individual subjects. This procedure was termed MT-based 3D visualization. CONCLUSION: The MTR serves as a new source of contrast for imaging of peripheral foot nerves and provides a means for high spatial resolution tracking of these structures. The proposed methodology is directly applicable on standard clinical MR scanners and could be applied to systemic pathologies, such as diabetes.
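The MTR computation and intensity-based segmentation described above can be sketched as below; the 33% threshold is an assumption placed between the two reported mean values (21.4% for nerve, 45.5% for muscle), not a value from the study:

```python
import numpy as np

def mtr_map(img_no_sat, img_sat, eps=1e-6):
    """Magnetization transfer ratio in percent: MTR = (M0 - Msat) / M0 * 100."""
    m0 = np.asarray(img_no_sat, dtype=float)
    msat = np.asarray(img_sat, dtype=float)
    return 100.0 * (m0 - msat) / np.maximum(m0, eps)  # eps guards empty voxels

# Toy 2x2 example: left column "muscle"-like, right column "nerve"-like
m0 = np.array([[100.0, 100.0], [100.0, 100.0]])
msat = np.array([[55.0, 78.0], [54.0, 79.0]])
mtr = mtr_map(m0, msat)
# Intensity-based segmentation of low-MTR voxels (nerve), threshold assumed
nerve_mask = mtr < 33.0
```

Thresholding the MTR map between the two tissue means is the simplest form of the intensity-based segmentation the abstract refers to; the actual tracking pipeline would add connectivity constraints.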
Abstract:
This work briefly analyses the difficulties of adopting the Semantic Web and, in particular, proposes systems to gauge the current level of migration to the different technologies that make up the Semantic Web. It focuses on the presentation and description of two tools, DigiDocSpider and DigiDocMetaEdit, designed with the aim of verifying, evaluating, and promoting its implementation.
Abstract:
This work reviews some Machine Translation systems that follow the transfer strategy and use feature structures as a representation tool. The work is part of project MLAP-9315, which investigates the reuse of the linguistic specifications of the EUROTRA project for industrial standards.
Abstract:
Solid-phase extraction (SPE) in tandem with dispersive liquid-liquid microextraction (DLLME) has been developed for the determination of mononitrotoluenes (MNTs) in several aquatic samples using a gas chromatography-flame ionization detection (GC-FID) system. In the hyphenated SPE-DLLME procedure, the MNTs were first extracted from a large volume of aqueous sample (100 mL) onto 500 mg of octadecyl silane (C(18)) sorbent. After elution of the analytes from the sorbent with acetonitrile, the resulting solution was subjected to the DLLME procedure so that extra preconcentration factors could be achieved. The parameters influencing the extraction efficiency, such as breakthrough volume, type and volume of the elution solvent (disperser solvent) and extracting solvent, as well as salt addition, were studied and optimized. The calibration curves were linear in the range of 0.5-500 μg/L and the limit of detection for all analytes was found to be 0.2 μg/L. The relative standard deviations (for 0.75 μg/L of MNTs) without internal standard varied from 2.0 to 6.4% (n=5). The relative recoveries from well, river and sea water samples, spiked at a concentration level of 0.75 μg/L of the analytes, were in the range of 85-118%.
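The figures of merit quoted above (relative recovery and relative standard deviation) follow from standard analytical definitions; this sketch uses invented replicate values around the 0.75 μg/L spike level, not the study's data:

```python
import statistics

def relative_recovery(found, spiked, blank=0.0):
    """Relative recovery (%) of a spiked sample: (C_found - C_blank) / C_spiked * 100."""
    return 100.0 * (found - blank) / spiked

def rsd_percent(replicates):
    """Relative standard deviation (%): sample stdev over mean, times 100."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical replicate determinations (n=5) of a 0.75 ug/L MNT spike
reps = [0.74, 0.72, 0.77, 0.73, 0.76]
recovery = relative_recovery(statistics.mean(reps), 0.75)
rsd = rsd_percent(reps)
```

With these invented replicates the recovery comes out near 99% and the RSD near 2.8%, i.e. inside the 2.0-6.4% precision range the abstract reports.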
Abstract:
Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this exploitation has resulted in the rapid development of the Internet. Nowadays, the Internet is the biggest container of resources: information databases such as Wikipedia, Dmoz and the open data available on the net hold great informational potential for mankind. Easy and free web access is one of the major features characterizing the Internet culture. Ten years ago, the web was completely dominated by English; today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We thus present the theoretical foundations as well as the ontology itself, named Linguistic Meta-Model. The most important feature of Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aiming at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval.
VIKI is a supporting system to Linguistic Meta-Model and its main task is to give some empirical evidence regarding the use of Linguistic Meta-Model without claiming to be thorough.
Abstract:
The practical component of my thesis consists of the particle effects for the animation known under the working title 3D-Kalevala. I examine in detail the film's Snow, Village, Forge and Cave scenes. 3D-Kalevala is a computer-generated animation about Väinämöinen, the main character of the Finnish national epic Kalevala. In the film, the old Väinämöinen recalls events from his youth. The 3D-Kalevala project was launched in 2003, but its original authors did not finish it, and work on the project was suspended in 2005. In spring 2006, a new project team of two online media students was set up to complete the film by spring 2007. When we started the project, I was a beginner in 3D modelling. For that reason, the written part of this thesis is a beginner's guide to the world of particles. In my report I explain how the film's particle effects were built, what making them demanded of me, and how well, in my view, they ultimately turned out. The film's effects were made with version 6.0 of 3D Studio Max, which is why I describe the construction of the effects in terms of that program. Because of the size of the project, both team members got to do many kinds of work, but the main areas of responsibility were clear; my part was to create the film's effects. Particle effects are procedural effects that make it possible to create realistic-looking natural phenomena such as fire, smoke, sparks and water splashes. Since particle effects model real-world phenomena, it is useful for the artist to be interested in studying how these phenomena behave in nature. I also report on good ways, discovered during the project, of independently studying the techniques used to build particle effects: reading 3D Studio Max's high-quality help application, getting to know 3D-themed forums on the Internet, working through tutorials on the subject, and exploring the program's features through independent experimentation.
The film's effects turned out commendably, in my opinion, given my starting level. I found ways to develop my skills and easy means of achieving realistic results when building effects. I hope my report will be of use to anyone interested in 3D particle effects.
Abstract:
The subject of my thesis is the continuation of a suspended new-media production. The practical component was a 7-minute 3D animation about Väinämöinen, a legendary figure of the Finnish national epic Kalevala, reminiscing about the past. The project was originally launched in 2003, but as resources dwindled it was suspended in late 2005. In spring 2006 the project was restarted with a new project team, in which I took part with responsibility for, among other things, production management and character animation. The new project team was small in staff resources, so the areas of responsibility were broad. The challenge of taking over and continuing a suspended project made me interested in studying the subject more closely. As production manager I was largely responsible for restarting the production and finally bringing the project to completion. Taking over a suspended project was a new situation for everyone, which showed as difficulties in the restarted production. The report is not meant to be a project management handbook, as I deal only with matters considered essential to the continuation of this particular project. I hope it nevertheless conveys the importance of careful project management and successful production management. Not every suspended project can necessarily be revived, at least not in its original form; continuing a project should always be considered case by case. There is usually a reason for the suspension, so identifying and addressing the problems is important before deciding on continuation. Project working methods also evolve, and in my thesis I consider new project management practices, such as the use of wikis as a project management tool and as a tool for improving communication within the project community.
Abstract:
The aim of this study was to compare the diagnostic efficiency of plain film and spiral CT examinations with 3D reconstructions of 42 tibial plateau fractures and to assess the accuracy of these two techniques in pre-operative surgical planning in 22 cases. Forty-two tibial plateau fractures were examined with plain film (anteroposterior, lateral, two obliques) and spiral CT with surface-shaded-display 3D reconstructions. The Swiss AO-ASIF classification system of bone fractures from Muller was used. In 22 cases the surgical plans and the sequence of reconstruction of the fragments were prospectively determined with both techniques, successively, and then correlated with the surgical reports and post-operative plain film. The fractures were underestimated with plain film in 18 of 42 cases (43%). Owing to the spiral CT 3D reconstructions and the precise pre-operative information they provided, the surgical plans based on plain film were modified and adjusted in 13 of 22 cases (59%). Spiral CT 3D reconstructions give a better and more accurate demonstration of tibial plateau fractures and allow a more precise pre-operative surgical plan.
Abstract:
Hyperammonemic disorders in pediatric patients lead to poorly understood irreversible effects on the developing brain that may be life-threatening. We showed previously that some of these NH4+-induced irreversible effects might be due to impairment of axonal growth that can be protected under ammonium exposure by creatine co-treatment. The aim of the present work was thus to analyse how the genes of arginine:glycine amidinotransferase (AGAT) and guanidinoacetate methyltransferase (GAMT), allowing creatine synthesis, as well as of the creatine transporter SLC6A8, allowing creatine uptake into cells, are regulated in rat brain cells under NH4+ exposure. Reaggregated brain cell three-dimensional cultures exposed to NH4Cl were used as an experimental model of hyperammonemia in the developing central nervous system (CNS). We show here that NH4+ exposure differentially alters AGAT, GAMT and SLC6A8 regulation, in terms of both gene expression and protein activity, in a cell type-specific manner. In particular, we demonstrate that NH4+ exposure decreases both creatine and its synthesis intermediate, guanidinoacetate, in brain cells, probably through the inhibition of AGAT enzymatic activity. Our work also suggests that oligodendrocytes are major actors in the brain in terms of creatine synthesis, trafficking and uptake, which might be affected by hyperammonemia. Finally, we show that NH4+ exposure induces SLC6A8 in astrocytes. This suggests that hyperammonemia increases blood-brain barrier permeability for creatine. This is normally limited due to the absence of SLC6A8 from the astrocyte feet lining microcapillary endothelial cells, and thus creatine supplementation may protect the developing CNS of hyperammonemic patients.
Abstract:
ABSTRACT: In sexual assault cases, autosomal DNA analysis of gynecological swabs is a challenge, as the presence of a large quantity of female material may prevent the detection of the male DNA. A solution to this problem is differential DNA extraction, but as there are different protocols, it was decided to test their efficiency on simulated casework samples. Four difficult samples were sent to the nine Swiss laboratories active in forensic genetics. They used their routine protocols to separate the epithelial cell fraction, enriched with the non-sperm DNA, from the sperm fraction. DNA extracts were then sent to the organizing laboratory for analysis. Estimates of the male to female DNA ratio without differential DNA extraction ranged from 1:38 to 1:339, depending on the semen used to prepare the samples. After differential DNA extraction, most of the ratios ranged from 1:12 to 9:1, allowing the detection of the male DNA. Compared to direct DNA extraction, cell separation resulted in losses of 94-98% of the male DNA. As expected, more male DNA was generally present in the sperm fraction than in the epithelial cell fraction. However, for about 30% of the samples, the reverse trend was observed. The recovery of male and female DNA varied considerably between laboratories. An experimental design similar to the one used in this study may help with local protocol testing and improvement.
Abstract:
OBJECT: To study a scan protocol for coronary magnetic resonance angiography based on multiple breath-holds featuring 1D motion compensation, and to compare the resulting image quality to a navigator-gated free-breathing acquisition. Image reconstruction was performed using L1-regularized iterative SENSE. MATERIALS AND METHODS: The effects of respiratory motion on the Cartesian sampling scheme were minimized by performing data acquisition in multiple breath-holds. During the scan, repetitive readouts through the k-space center were used to detect and correct the respiratory displacement of the heart by exploiting the self-navigation principle in image reconstruction. In vivo experiments were performed in nine healthy volunteers and the resulting image quality was compared to a navigator-gated reference in terms of vessel length and sharpness. RESULTS: Acquisition in breath-holds is an effective method to reduce the scan time by more than 30% compared to the navigator-gated reference. Although an equivalent mean image quality with respect to the reference was achieved with the proposed method, the 1D motion compensation did not work equally well in all cases. CONCLUSION: In general, the image quality scaled with the robustness of the motion compensation. Nevertheless, the featured setup provides a positive basis for future extensions with more advanced motion compensation methods.
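A minimal sketch of how the self-navigation principle can estimate a 1D displacement from repeated central k-space readouts: the inverse FFT of a central readout is a projection of the object, and the shift between two projections appears as the peak of their circular cross-correlation. All names and the synthetic profile are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def estimate_shift(ref_readout, readout):
    """Estimate the integer 1D displacement between two k-space center
    readouts by cross-correlating their projection magnitudes."""
    ref_proj = np.abs(np.fft.ifft(ref_readout))
    proj = np.abs(np.fft.ifft(readout))
    # Circular cross-correlation via the Fourier correlation theorem
    xcorr = np.fft.ifft(np.fft.fft(proj) * np.conj(np.fft.fft(ref_proj)))
    shift = int(np.argmax(np.abs(xcorr)))
    n = len(readout)
    return shift if shift <= n // 2 else shift - n  # wrap to a signed shift

# Synthetic Gaussian projection displaced by 3 samples, observed in k-space
n = 64
profile = np.exp(-0.5 * ((np.arange(n) - 20) / 4.0) ** 2)
shifted = np.roll(profile, 3)
shift = estimate_shift(np.fft.fft(profile), np.fft.fft(shifted))
```

In the paper's setting the estimated per-readout displacement would then be fed back into the reconstruction to correct residual motion between breath-holds; subsample-precision variants would interpolate around the correlation peak.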