90 results for Voxels
Abstract:
Computed tomography (CT) images allow visualization of the maxillofacial complex, particularly the alveolar bone, without distortion or superimposition. Studies have demonstrated good reproducibility and accuracy for measurements of alveolar crest height, but the influence of bone thickness remains poorly described. By comparison with direct measurement, the aim of this study was to evaluate the accuracy, the reproducibility, and the influence of bone thickness on the measurement of alveolar crest height in volumetric images and in two-dimensional multiplanar images from cone-beam CT (CBCT) and spiral CT (SCT). Using 10 dry human mandibles, 57 anterior teeth were scanned on iCAT (Imaging Science International, Hatfield, PA, USA) and Brilliance 64-channel (Philips Electronics, Eindhoven, Netherlands) scanners, both with 0.25 mm voxels. Alveolar crest height measured on volumetric (3D) images and on two-dimensional (2D) multiplanar slices was compared with direct measurement on the mandibles, performed on the buccal and lingual aspects by three examiners using a caliper, for a total of 114 alveolar crests measured. High intra-examiner (0.999 to 0.902) and inter-examiner (0.998 and 0.868) reproducibility was observed using the intraclass correlation coefficient (ICC). High correlation between direct and indirect measurement of alveolar crest height was observed in 2D images (r=0.923** and 0.916**) and in 3D images (r=0.929** and 0.954*), for CBCT and SCT respectively. 2D images overestimated alveolar crest height by 0.32 and 0.49 mm, and 3D images by 0.34 and 0.30 mm, for CBCT and SCT respectively. When the alveolar bone is at least 0.6 mm thick, the mean difference between direct and indirect measurements is 0.16 and 0.28 mm in 2D images and 0.12 and 0.03 mm in 3D images for CBCT and SCT respectively, with 95% limits of agreement ranging from -0.46 to 0.79 mm and -0.32 to 0.88 mm in 2D images, and from -0.64 to 0.67 mm and -0.57 to 0.62 mm in 3D images, for CBCT and SCT respectively. When the alveolar bone is thinner than 0.6 mm, CT is inaccurate, as the 95% limits of agreement ranged from -1.74 to 5.42 mm and -1.64 to 5.42 mm in 2D images, and from -3.70 to 4.28 mm and -3.49 to 4.25 mm in 3D images, for CBCT and SCT respectively. We conclude that measurement of alveolar crest height on tomographic images is highly reproducible; when the alveolar crest is at least 0.6 mm thick, measurement accuracy is high, but when the thickness is below 0.6 mm the technique is inaccurate.
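A minimal sketch of the Bland-Altman statistics reported above (mean difference and 95% limits of agreement between direct caliper and CT-based measurements); the data and variable names are illustrative placeholders, not the study's:

```python
# Hedged sketch: Bland-Altman bias and 95% limits of agreement, the form of
# the agreement figures quoted in the abstract. Paired values are made up.
import numpy as np

def bland_altman(direct_mm: np.ndarray, ct_mm: np.ndarray):
    """Return mean difference (bias) and 95% limits of agreement."""
    diff = ct_mm - direct_mm            # CT minus direct caliper measurement
    bias = diff.mean()                  # e.g. a systematic overestimation
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

direct = np.array([10.1, 11.4, 9.8, 12.0])   # illustrative heights (mm)
ct2d = np.array([10.4, 11.8, 10.1, 12.3])
bias, (lo, hi) = bland_altman(direct, ct2d)
print(f"bias={bias:.2f} mm, 95% LoA=({lo:.2f}, {hi:.2f}) mm")
```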
Abstract:
The aim of this study was to perform a quantitative and qualitative morphological study of the mandibular symphysis (MS) region through the construction of three-dimensional (3D) models, and to evaluate its degree of association with different facial-pattern classifications. Sixty-one dry human skulls of young adults with normal occlusion, aged 18 to 45 years and with complete dentition, were evaluated. Cone-beam computed tomography (CBCT) scans of all skulls were obtained in a standardized manner. The facial pattern was determined by anthropometric and cephalometric methods. Under the anthropometric criterion, taking the facial index (FI) as reference, the facial pattern was classified as euryprosopic (≤84.9), mesoprosopic (85.0 - 89.9), or leptoprosopic (≥90.0). Under the cephalometric criterion, the mandibular plane angle (FMA) classified the facial pattern as short (≤21.0), medium (21.1 - 29.0), or long (≥29.1); and the facial height index (FHI) classified the face as hypodivergent (≥0.750), normal (0.749 - 0.650), or hyperdivergent (≤0.649). 3D models representing the MS region were built with the ITK-SNAP software. The teeth present in this region (lower incisors, canines, and premolars) were separated from the model by semi-automatic segmentation followed by manual refinement. Models containing only bone tissue were then obtained, enabling measurement of bone volume in mm3 (VOL) and of radiographic density as the mean voxel intensity (Mvox). In Geomagic Studio 10, an anatomical best-fit superimposition of the 3D models was performed to establish a standardized midline cutting plane. For each symphysis, the height (Alt) and width (Larg) were measured and a height-to-width ratio (PAL) was calculated. Alveolar defects were assessed directly on the mandible, yielding the mean of all alveolar bone heights (AltOss) and the mean dimension of the dehiscences present (Medef). The intraclass correlation coefficient (ICC), with values between 0.923 and 0.994, indicated high reproducibility and reliability of the measured variables. Differences between the groups defined by the facial-pattern classifications (FI, FMA, and FHI) were evaluated by one-way analysis of variance (ANOVA) followed by Tukey's post-hoc test. The degree of association between the facial pattern and the variables VOL, Mvox, PAL, Alt, Larg, AltOss, and Medef was evaluated by Pearson's correlation coefficient with a t test for r. The results indicated no difference or association between the volume, radiographic density, or presence of alveolar defects of the MS and the facial pattern as determined by FI, FMA, or FHI. A tendency toward longer symphyses was observed in individuals with long faces, but width showed no association with the facial pattern. These results suggest that the classifications used to determine the facial pattern do not satisfactorily represent the 3D character of the human face and are not associated with MS morphology.
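A minimal sketch of the group comparison described above (one-way ANOVA followed by a Tukey post-hoc test across facial-pattern groups), assuming scipy and statsmodels are available; the volume data are synthetic placeholders:

```python
# Hedged sketch: one-way ANOVA + Tukey HSD over three facial-pattern groups,
# the analysis named in the abstract. Values are illustrative, not measured.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
vol = {g: rng.normal(1500, 120, 20)     # symphysis volume (mm3) per group
       for g in ("euryprosopic", "mesoprosopic", "leptoprosopic")}

f, p = stats.f_oneway(*vol.values())    # one-way ANOVA on volume
print(f"F={f:.2f}, p={p:.3f}")

values = np.concatenate(list(vol.values()))
labels = np.repeat(list(vol.keys()), [len(v) for v in vol.values()])
print(pairwise_tukeyhsd(values, labels))  # pairwise group differences
```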
Abstract:
This paper proposes a novel framework to construct a geometric and photometric model of a viewed object that can be used for visualisation in arbitrary pose and illumination. The method is solely based on images and does not require any specialised equipment. We assume that the object has a piece-wise smooth surface and that its reflectance can be modelled using a parametric bidirectional reflectance distribution function. Without assuming any prior knowledge on the object, geometry and reflectance have to be estimated simultaneously and occlusion and shadows have to be treated consistently. We exploit the geometric and photometric consistency using the fact that surface orientation and reflectance are local invariants. In a first implementation, we demonstrate the method using a Lambertian object placed on a turn-table and illuminated by a number of unknown point light-sources. A discrete voxel model is initialised to the visual hull and voxels identified as inconsistent with the invariants are removed iteratively. The resulting model is used to render images in novel pose and illumination. © 2004 Elsevier B.V. All rights reserved.
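A minimal sketch of the iterative carving loop described above, assuming a hypothetical view API (`sees`, `sample`) that stands in for the paper's visibility reasoning, and using color variance as a stand-in consistency test for the local reflectance invariant:

```python
# Hedged sketch: initialise a boolean voxel grid to the visual hull, then
# iteratively remove voxels whose back-projected samples violate a
# photometric-consistency test. `views` is a list of hypothetical objects.
import numpy as np

def carve(occupied: np.ndarray, views, threshold: float) -> np.ndarray:
    """occupied: boolean voxel grid, True inside the visual hull."""
    changed = True
    while changed:                       # iterate until no voxel is removed
        changed = False
        for idx in np.argwhere(occupied):
            samples = [v.sample(idx) for v in views
                       if v.sees(idx, occupied)]         # unoccluded views only
            if samples and np.var(samples) > threshold:  # invariant violated
                occupied[tuple(idx)] = False
                changed = True
    return occupied
```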
Abstract:
Reading is an important human-specific skill obtained through extensive learning experience, and it relies on the ability to rapidly recognize single words. According to behavioral studies, the most important stage of reading is the representation of the "visual word form", which is independent of the surface visual features of the reading material. The prelexical visual word form representation is characterized by abstract, highly efficient, and precise processing. Neuroimaging and neuropsychological studies have investigated the neural basis underlying visual word form processing. Building on a summary of the existing literature, the current thesis aimed to address three fundamental questions concerning the neural basis of word recognition. First, is there a dedicated neural network specialized for word recognition? Second, is orthographic information represented in the putative word/character-selective region (the VWFA)? Third, what is the role of reading experience in the genesis of the VWFA: is experience, rather than evolutionary selectivity, the main driver shaping it? Nineteen literate Chinese volunteers, 5 Chinese illiterates, and 4 native English speakers participated in this study and performed perceptual tasks during fMRI scanning. To address the first question, we compared the differential responses to three categories of visual objects (faces, line drawings of objects, and Chinese characters) and defined the regions of interest (ROIs) for the next experiment. To address the second question, Chinese character orthography was manipulated to reveal possible differential responses to real characters, false characters, radical combinations, and stroke combinations in the regions defined by the first experiment. To examine the role of reading experience in the genesis of specialization for characters, the responses to unfamiliar Chinese characters in Chinese illiterates and native English speakers were compared with those in the Chinese literates, and the change in cortical activation after a short-term reading training in the illiterates was tracked. Data were analyzed along two dimensions: both BOLD signal amplitude and the spatial distribution pattern across multiple voxels were used to systematically investigate the responsiveness of the left fusiform gyrus to Chinese characters. Our results provide strong and clear evidence for the existence of functionally specialized regions in the human ventral occipito-temporal cortex. In skilled readers, a region specialized for written words was consistently found in the lateral part of the left fusiform gyrus, one for line drawings in the medial part, and one for faces in between. Our results further show that spatial distribution analysis, a method not commonly used in the neuroimaging of reading, appears to be a more effective measure of category specialization in visual object processing. Although we failed to find evidence that the VWFA processes orthographic information in terms of signal intensity, we do show that the response pattern for real characters and radical combinations in this area differs from that for false characters and random stroke combinations. Our last set of experiments suggests that the selective bias toward reading material is clearly experience-dependent. The response to unknown characters in both English speakers/readers and Chinese illiterates is fundamentally different from that of skilled Chinese readers.

The response pattern for unknown characters is more similar to that for line drawings than to a weak version of the character pattern seen in skilled Chinese readers. Short-term training was not sufficient to produce a VWFA bias even when tested with learned characters; rather, the learned characters generated an overall upward shift of the activation of the left fusiform region. Formation of a dedicated region specialized for visual words/characters might depend on long-term extensive reading experience, or there might be a critical period for reading acquisition.
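The spatial-distribution (multi-voxel pattern) analysis mentioned above is commonly reduced to correlations between voxel response patterns; a minimal sketch under that assumption, with synthetic data:

```python
# Hedged sketch: correlation-based pattern similarity. Two conditions share a
# representation if within-condition correlations exceed between-condition
# ones. The data below are random placeholders, not fMRI measurements.
import numpy as np

def pattern_similarity(patterns_a: np.ndarray, patterns_b: np.ndarray) -> float:
    """Pearson correlation between the mean voxel patterns of two conditions."""
    a, b = patterns_a.mean(axis=0), patterns_b.mean(axis=0)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
chars = rng.normal(size=(10, 500))       # 10 runs x 500 voxels, characters
drawings = chars + rng.normal(scale=2.0, size=(10, 500))  # noisier condition
print(pattern_similarity(chars[::2], chars[1::2]))  # within-condition (high)
print(pattern_similarity(chars, drawings))          # between-condition (lower)
```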
Abstract:
Space carving has emerged as a powerful method for multiview scene reconstruction. Although a wide variety of methods have been proposed, the quality of the reconstruction remains highly dependent on the photometric consistency measure and on the threshold used to carve away voxels. In this paper, we present a novel photo-consistency measure that is motivated by a multiset variant of the chamfer distance. The new measure is robust to high amounts of within-view color variance and also takes into account the projection angles of back-projected pixels. Another critical issue in space carving is the selection of the photo-consistency threshold used to determine which surface voxels are kept or carved away. In this paper, a reliable threshold selection technique is proposed that examines the photo-consistency values at contour generator points. Contour generators are points that lie on both the surface of the object and the visual hull. To determine the threshold, a percentile ranking of the photo-consistency values of these generator points is used. This improved technique is applicable to a wide variety of photo-consistency measures, including the new measure presented in this paper. Also presented is a method to choose between photo-consistency measures and voxel array resolutions prior to carving, using receiver operating characteristic (ROC) curves.
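A minimal sketch of the proposed threshold-selection rule, assuming per-voxel photo-consistency scores and a mask of contour-generator voxels are already available:

```python
# Hedged sketch: pick the carving threshold as a percentile of the
# photo-consistency values at contour-generator voxels (voxels on both the
# object surface and the visual hull). The percentile value is illustrative.
import numpy as np

def select_threshold(photo_consistency: np.ndarray,
                     generator_mask: np.ndarray,
                     percentile: float = 90.0) -> float:
    """photo_consistency: per-voxel score; generator_mask: boolean mask."""
    return float(np.percentile(photo_consistency[generator_mask], percentile))
```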
Abstract:
PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach that the authors refer to as rank-sparse kernel regression, they transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). The 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to the authors' standard imaging protocol. CONCLUSIONS: The 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
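The low-rank plus sparse split underlying the 5D model can be illustrated in its generic alternating-proximal form; a minimal sketch, not the authors' full split Bregman solver:

```python
# Hedged sketch: decompose a (voxels x time/energy) matrix X as low-rank L
# plus sparse S by alternating proximal steps. A heuristic illustration of
# rank-sparsity regularization, not the paper's reconstruction pipeline.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, lam):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def lowrank_sparse(X, tau=1.0, lam=0.1, iters=50):
    L, S = np.zeros_like(X), np.zeros_like(X)
    for _ in range(iters):               # alternate the two proximal updates
        L = svt(X - S, tau)
        S = shrink(X - L, lam)
    return L, S
```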
Abstract:
The outcomes of both (i) radiation therapy and (ii) preclinical small animal radiobiology studies depend on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or too low, and can also result from an incorrect spatial distribution in which nearby normal healthy tissue is undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequential to either (i) high dose rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, as commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.
In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator that has been fixed on an optical fiber terminus. This dosimeter allows for the measurement of point doses to sub-millimeter resolution, and has the ability to be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied for small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.
Hardware design changes were implemented to reduce the overall fiber diameter to <0.9 mm for the nano-crystalline scintillator based fiber optic detector (NanoFOD) system. The lower limit of device sensitivity was found to be approximately 0.05 cGy/s. Herein, this detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermoluminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an inter-quartile range of the ratio to the TPS dose value of 0.08 (1.02-0.94). After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over SDD values of 3 to 9 cm; however, a cap made of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.
For preclinical measurements, dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom at 225 kV for x-ray irradiation at angles of 0, 90, 180, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with the 4x4 cm collimator and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy at the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half maximum lateral dimension of the MRT beam were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences are explained mostly as an artifact of the geometry used and volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and maximum device sensitivity was achieved at 100 kV. Tissue maximum ratio measurements were carried out on a small animal x-ray irradiator system at 320 kV and demonstrated an average difference of 0.9% compared to a MOSFET dosimeter over the range of 2.5 to 33 cm depth in tissue-equivalent plastic blocks. Irradiating the NanoFOD fiber and scintillator material to 1600 Gy on a 137Cs gamma irradiator did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without replacement or recalibration over its lifetime.
For small animal irradiator systems, researchers can deliver a given dose to a target organ by controlling exposure time. Currently, researchers calculate this exposure time by dividing the total dose they wish to deliver by a single provided dose rate value, a method that is independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom comprised 256x256x800 voxels of size 0.145x0.145x0.145 mm3. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated during whole-body irradiation of mice, and methods for alleviating these errors by utilizing organ-specific and x-ray tube filter-specific dose rates for all irradiations were suggested.
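A minimal sketch of the organ-specific dose prescription proposed above, where exposure time is read from a per-organ, per-filter dose-rate table; the rates below are placeholders, not measured values:

```python
# Hedged sketch: exposure time = target dose / organ- and filter-specific
# dose rate, replacing a single global dose rate. Table values are invented.
DOSE_RATE_CGY_PER_MIN = {              # (organ, x-ray tube filter) -> cGy/min
    ("liver", "0.3mm Cu"): 120.0,
    ("lung",  "0.3mm Cu"): 135.0,
}

def exposure_time_min(target_dose_cgy: float, organ: str, filt: str) -> float:
    """Minutes of beam-on time needed to deliver target_dose_cgy."""
    return target_dose_cgy / DOSE_RATE_CGY_PER_MIN[(organ, filt)]

print(exposure_time_min(600.0, "liver", "0.3mm Cu"))  # time for a 6 Gy dose
```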
Monte Carlo analysis was applied to 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope type irradiators. Results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were found to be 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.
Abstract:
Nano- and meso-scale simulation of chemical ordering kinetics in nano-layered L1(0)-AB binary intermetallics was performed. At the nano (atomistic) scale, a Monte Carlo (MC) technique with a vacancy mechanism of atomic migration, implemented with diverse models for the system energetics, was used. The meso-scale microstructure evolution was, in turn, simulated by means of an MC procedure applied to a system built of meso-scale voxels ordered in particular L1(0) variants. The voxels were free to change L1(0) variant and interacted with antiphase-boundary energies evaluated within the nano-scale simulations. The study addressed FePt thin layers, considered a material for ultra-high-density magnetic storage media, and revealed metastability of the L1(0) c-variant superstructure with monoatomic planes parallel to the (001)-oriented layer surface and off-plane easy magnetization. The layers, originally perfectly ordered in the c-variant, showed discontinuous precipitation of a- and b-L1(0)-variant domains running in parallel with homogeneous disordering (i.e. generation of antisite defects). The domains nucleated heterogeneously on the free monoatomic Fe surface of the layer, grew inward into its volume, and relaxed towards an equilibrium microstructure of the system. Two
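A minimal sketch of a Metropolis step for the meso-scale model described above, assuming each voxel carries one of the three L1(0) variants and mismatched neighbours pay an antiphase-boundary energy J; all parameters are illustrative:

```python
# Hedged sketch: Metropolis Monte Carlo over a voxel grid whose states are
# L1(0) variants 'a', 'b', 'c'. J is a single antiphase-boundary energy
# standing in for the values evaluated in the nano-scale simulations.
import math
import random

def metropolis_step(grid, J, kT):
    """grid: dict mapping (x, y, z) -> variant in {'a', 'b', 'c'}."""
    site = random.choice(list(grid))
    old, new = grid[site], random.choice("abc")
    nbrs = [tuple(c + d for c, d in zip(site, off))
            for off in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))]
    # Energy change: +J per newly mismatched neighbour, -J per newly matched.
    dE = sum(J * ((grid.get(n) != new) - (grid.get(n) != old))
             for n in nbrs if n in grid)
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        grid[site] = new                 # accept the variant flip
```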
Abstract:
Proprioceptive information from the foot/ankle provides important information regarding body sway for balance control, especially in situations where visual information is degraded or absent. Given known increases in catastrophic injury due to falls with older age, understanding the neural basis of proprioceptive processing for balance control is particularly important for older adults. In the present study, we linked neural activity in response to stimulation of key foot proprioceptors (i.e., muscle spindles) with balance ability across the lifespan. Twenty young and 20 older human adults underwent proprioceptive mapping; foot tendon vibration was compared with vibration of a nearby bone in an fMRI environment to determine regions of the brain that were active in response to muscle spindle stimulation. Several body sway metrics were also calculated for the same participants on an eyes-closed balance task. Based on regression analyses, multiple clusters of voxels were identified showing a significant relationship between muscle spindle stimulation-induced neural activity and maximum center of pressure excursion in the anterior-posterior direction. In this case, increased activation was associated with greater balance performance in parietal, frontal, and insular cortical areas, as well as structures within the basal ganglia. These correlated regions were age- and foot-stimulation side-independent and largely localized to right-sided areas of the brain thought to be involved in monitoring stimulus-driven shifts of attention. These findings support the notion that, beyond fundamental peripheral reflex mechanisms, central processing of proprioceptive signals from the foot is critical for balance control.
Abstract:
How can we correlate the neural activity in the human brain as it responds to typed words with properties of these terms (like 'edible', 'fits in hand')? In short, we want to find latent variables that jointly explain both the brain activity and the behavioral responses. This is one of many settings of the Coupled Matrix-Tensor Factorization (CMTF) problem.
Can we accelerate any CMTF solver so that it runs within a few minutes instead of tens of hours to a day, while maintaining good accuracy? We introduce Turbo-SMT, a meta-method capable of doing exactly that: it boosts the performance of any CMTF algorithm by up to 200x, along with up to a 65-fold increase in sparsity, at accuracy comparable to the baseline.
We apply Turbo-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Turbo-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy.
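A minimal sketch of the underlying CMTF computation (a plain alternating least squares variant, not Turbo-SMT itself), with the tensor and the matrix coupled along the first (nouns) mode:

```python
# Hedged sketch: ALS for Coupled Matrix-Tensor Factorization. X is a
# (nouns x voxels x subjects) tensor, Y a (nouns x properties) matrix; the
# factor A is shared along the coupled nouns mode.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product; row (i, j) maps to i*len(V)+j."""
    return np.einsum("ir,jr->ijr", U, V).reshape(-1, U.shape[1])

def cmtf_als(X, Y, rank, iters=50):
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C, D = (rng.standard_normal((n, rank)) for n in (I, J, K, Y.shape[1]))
    X1 = X.reshape(I, -1)                        # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, -1)     # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, -1)     # mode-3 unfolding
    for _ in range(iters):
        M = np.vstack([khatri_rao(B, C), D])     # coupled update of shared A
        A = np.hstack([X1, Y]) @ np.linalg.pinv(M.T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
        D = Y.T @ np.linalg.pinv(A.T)
    return A, B, C, D
```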
Abstract:
How can we correlate neural activity in the human brain as it responds to words with behavioral data expressed as answers to questions about these same words? In short, we want to find latent variables that explain both the brain activity and the behavioral responses. We show that this is an instance of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem and produces a sparse latent low-rank subspace of the data. In our experiments, we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm for CMTF, along with a 5-fold increase in sparsity. Moreover, we extend Scoup-SMT to handle missing data without degradation of performance. We apply Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy. Finally, we demonstrate the generality of Scoup-SMT by applying it to a Facebook dataset (users, friends, wall-postings); there, Scoup-SMT spots spammer-like anomalies.
Abstract:
The modulation of neural activity in visual cortex is thought to be a key mechanism of visual attention. The investigation of attentional modulation in high-level visual areas, however, is hampered by the lack of clear tuning or contrast response functions. In the present functional magnetic resonance imaging study we therefore systematically assessed how small voxel-wise biases in object preference across hundreds of voxels in the lateral occipital complex were affected when attention was directed to objects. We found that the strength of attentional modulation depended on a voxel's object preference in the absence of attention, a pattern indicative of an amplificatory mechanism. Our results show that such attentional modulation effectively increased the mutual information between voxel responses and object identity. Further, these local modulatory effects led to improved information-based object readout at the level of multi-voxel activation patterns and to an increased reproducibility of these patterns across repeated presentations. We conclude that attentional modulation enhances object coding in local and distributed object representations of the lateral occipital complex.
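A minimal sketch of the mutual-information quantity referred to above, computed from a joint histogram of discretised voxel responses and object labels; bin count and inputs are illustrative:

```python
# Hedged sketch: I(response; object identity) from binned voxel responses.
# `labels` are integer object identities starting at 0.
import numpy as np

def mutual_information(responses: np.ndarray, labels: np.ndarray, bins=8):
    """Mutual information (bits) between binned responses and labels."""
    edges = np.histogram_bin_edges(responses, bins)[1:-1]  # interior edges
    r = np.digitize(responses, edges)                      # bin index 0..bins-1
    joint = np.zeros((bins, labels.max() + 1))
    for ri, li in zip(r, labels):
        joint[ri, li] += 1
    p = joint / joint.sum()                                # joint distribution
    pr, pl = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pr @ pl)[nz])).sum())
```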
Abstract:
The algorithm developed uses an octree pyramid in which noise is reduced at the expense of spatial resolution. At a certain level, unsupervised clustering without spatial connectivity constraints is applied. After the classification, isolated voxels and insignificant regions are removed by assigning them to their neighbours. The spatial resolution is then increased by down-projecting the regions, level by level. At each level, the uncertainty of the boundary voxels is minimised by dynamically selecting and classifying them using adaptive 3D filtering. The algorithm is tested on different data sets, including NMR data.
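A minimal sketch of the pyramid construction described above, where each coarser level halves the resolution by averaging 2x2x2 voxel blocks (trading spatial resolution for noise reduction):

```python
# Hedged sketch: octree-style pyramid by 2x2x2 block averaging. Assumes the
# volume dimensions are divisible by 2**levels.
import numpy as np

def octree_pyramid(volume: np.ndarray, levels: int):
    """Return [full-resolution volume, ..., coarsest level]."""
    pyramid = [volume]
    for _ in range(levels):
        v = pyramid[-1]
        v = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                      v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)                # next, coarser (less noisy) level
    return pyramid
```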
Abstract:
Autonomous systems add value to search-and-rescue scenarios by minimizing the presence of humans in dangerous situations and by reaching locations that are difficult to access. This dissertation proposes new methods for the perception and navigation of unmanned aerial vehicles (UAVs), with a main focus on trajectory planning and obstacle detection. Regarding perception, a method was developed to generate clusters from the voxels produced by Octomap. In the navigation area, two new trajectory-planning methods were developed, GPRM (Grid Probabilistic Roadmap) and PPRM (Particle Probabilistic Roadmap), both built on PRM. The first method, GPRM, spreads particles over a predefined grid, then builds the roadmap within the area determined by the grid and from it estimates the shortest path to the destination point. The second method, PPRM, spreads particles over the application scenario, generates the roadmap considering the full map, and assigns a probability that is used to define the optimized trajectory. To analyze the performance of each method against PRM, they are evaluated in three distinct scenarios using the MORSE simulator.
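A minimal sketch of the core PRM pipeline that GPRM and PPRM extend: sample collision-free points, connect near neighbours, and search the roadmap for the shortest path. Here `point_free` and `segment_free` are hypothetical stand-ins for Octomap occupancy queries:

```python
# Hedged sketch: basic Probabilistic Roadmap construction plus Dijkstra
# search. Collision checks are caller-supplied placeholders.
import heapq
import numpy as np

def build_prm(samples, point_free, segment_free, radius):
    """Keep collision-free samples and connect pairs closer than `radius`."""
    pts = [p for p in samples if point_free(p)]
    edges = {i: [] for i in range(len(pts))}
    for i in range(len(pts)):
        for j in range(i):
            d = float(np.linalg.norm(np.subtract(pts[i], pts[j])))
            if d < radius and segment_free(pts[i], pts[j]):
                edges[i].append((j, d))
                edges[j].append((i, d))
    return pts, edges

def shortest_path(edges, start, goal):
    """Dijkstra over the roadmap; assumes the goal is reachable."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```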
Abstract:
q-Space-based techniques such as diffusion spectrum imaging, q-ball imaging, and their variations have been used extensively in research for their desired capability to delineate complex neuronal architectures such as multiple fiber crossings in each of the image voxels. The purpose of this article was to provide an introduction to the q-space formalism and the principles of basic q-space techniques, together with a discussion of the advantages as well as the challenges in translating these techniques into the clinical environment. A review of currently used q-space-based protocols in clinical research is also provided.