992 results for Level-sets
Abstract:
Vertebroplasty is a minimally invasive procedure with many benefits; however, the procedure is not without risks and potential complications, of which leakage of the cement out of the vertebral body and into the surrounding tissues is one of the most serious. Cement can leak into the spinal canal, venous system, soft tissues, lungs and intradiscal space, causing serious neurological complications, tissue necrosis or pulmonary embolism. We present a method for automatic segmentation and tracking of bone cement during vertebroplasty procedures, as a first step towards developing a warning system to avoid cement leakage outside the vertebral body. We show that, by using active contours based on level sets, the shape of the injected cement can be accurately detected. The segmentation model proposed in our previous work has been improved by including a term that restricts the level set function to the vertebral body. The method has been applied to a set of real intra-operative X-ray images, and the results show that the algorithm can successfully detect different shapes with blurred and poorly defined boundaries, where classical active contour segmentation is not applicable. The method has been positively evaluated by physicians.
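For intuition, a Chan-Vese-style level set evolution of the kind described above can be written as follows. The restriction term shown here (weight alpha and vertebral-body mask M) is an illustrative assumption about how such a constraint can be added, not necessarily the exact formulation used in the paper.

```latex
% Illustrative sketch (not the paper's exact formulation).
% phi : level set function, with phi > 0 inside the segmented cement
% I   : intra-operative X-ray image, c1/c2 : mean intensities inside/outside
% M   : binary mask of the vertebral body (1 inside, 0 outside), alpha >= 0
\frac{\partial \phi}{\partial t} \;=\; \delta_\epsilon(\phi)\Big[
      \mu\,\mathrm{div}\!\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big)
    \;-\; \lambda_1\,(I - c_1)^2 \;+\; \lambda_2\,(I - c_2)^2
    \;-\; \alpha\,(1 - M) \Big]
% The last term penalizes front motion outside the mask M, shrinking the
% segmented region there and thereby restricting it to the vertebral body.
```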
Abstract:
In the present contribution, we characterise law-determined convex risk measures that have convex level sets at the level of distributions. By relaxing the assumptions in Weber (Math. Finance 16:419–441, 2006), we show that these risk measures can be identified with a class of generalised shortfall risk measures. As a direct consequence, we are able to extend the results in Ziegel (Math. Finance, 2014, http://onlinelibrary.wiley.com/doi/10.1111/mafi.12080/abstract) and Bellini and Bignozzi (Quant. Finance 15:725–733, 2014) on convex elicitable risk measures and confirm that expectiles are the only elicitable coherent risk measures. Further, we provide a simple characterisation of robustness for convex risk measures in terms of a weak notion of mixture continuity.
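As a concrete illustration of elicitability in the sense used above, the tau-expectile is the unique minimizer of an asymmetric quadratic expected score, which is exactly what makes it elicitable; the sign convention for turning it into a risk measure is left vague on purpose.

```latex
% The tau-expectile as the minimizer of an asymmetric quadratic score:
e_\tau(X) \;=\; \arg\min_{x \in \mathbb{R}} \;
    \mathbb{E}\!\left[ \tau\,\big((X - x)^+\big)^2
                     + (1-\tau)\,\big((x - X)^+\big)^2 \right],
\qquad \tau \in (0,1).
% For tau = 1/2 this recovers the mean; for tau >= 1/2 (under the loss-oriented
% sign convention) the expectile is coherent, consistent with the result above.
```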
Abstract:
Pre-operative planning has become an essential task in complex surgeries and therapies, especially those affecting soft tissue. One example where soft-tissue pre-operative planning is of high interest is liver surgery. It involves the accurate detection and identification of individual liver lesions and vessels, as well as proper functional liver segmentation and volume estimation. This process is very important because it determines whether the patient is a suitable candidate for surgical therapy and the type of procedure. Soft-tissue radiation therapy is a second example where planning is required, for both conventional external and intraoperative radiotherapy. It involves the segmentation of the tumor target and vulnerable organs and the estimation of the planned dose. Functional liver segmentations and volume estimations for surgery planning are commonly obtained from computed tomography (CT) images. Similarly, in radiation therapy planning, the targets to be irradiated and the healthy, vulnerable tissues to be protected from irradiation are commonly delineated on CT scans. However, developments in magnetic resonance imaging (MRI) technology are progressively offering additional advantages. For instance, the hepatic metastasis detection rate has been found to be significantly higher in Gd–EOB–DTPA-enhanced MRI than in CT. Therefore, recent studies highlight the importance of combining the information from CT and MRI to achieve the highest possible level of accuracy in radiotherapy and to facilitate an accurate description of liver lesions. To improve these two soft-tissue pre-operative planning scenarios, an accurate nonrigid image registration algorithm is clearly required. However, the vast majority of commercial systems only provide rigid registration. Voxel intensity measures have been shown to be robust measures of image similarity, and among them Mutual Information (MI) is always the first candidate in multimodal registration. However, one of the main drawbacks of Mutual Information is the absence of spatial information and the assumption that the statistical relationships between the images are the same over their whole domain.
The hypothesis of the present thesis is that incorporating spatial organ information into the registration process may improve registration robustness and quality, taking advantage of the availability of clinical segmentations. In this work, a multimodal nonrigid 3D registration framework using a new Organ-Focused Mutual Information metric (OF-MI) is proposed, validated and compared to the classical formulation of Mutual Information (MI). It improves registration results in problematic areas by adding regional information to the similarity criterion, taking advantage of the segmentations actually available in standard clinical protocols, and allowing the statistical dependence between the two modalities to differ among organs or regions. The proposed method is applied to the registration of CT and T1-weighted delayed Gd–EOB–DTPA-enhanced MRI, as well as to the registration of CT and MRI images in rectal intraoperative radiotherapy planning. Additionally, a 3D support segmentation algorithm based on Level-Sets has been developed for incorporating the organ information into the registration. The segmentation algorithm has been specifically designed for healthy functional liver volume estimation and has demonstrated good performance on a set of abdominal CT studies. Results show a statistically significant improvement in registration quality measures with OF-MI compared to MI, both with simulated data (p<0.001) and with real data, in liver applications registering CT and Gd–EOB–DTPA-enhanced MRI and in registration for rectal radiotherapy planning using multi-organ OF-MI (p<0.05). Additionally, OF-MI presents more stable results with smaller dispersion than MI and more robust behavior with respect to SNR changes and parameter variation. The proposed OF-MI always presents equal or better accuracy than classical MI and can consequently be a very convenient alternative in applications where the robustness of the method and the ease of parameter selection are particularly important.
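A minimal sketch of the general idea behind a region-aware mutual information metric: MI is accumulated per organ mask instead of over the whole image domain. The function names, the uniform weighting across regions and the histogram settings are assumptions for illustration; the exact OF-MI formulation in the thesis may differ.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two intensity arrays."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def organ_focused_mi(fixed, moving, organ_masks, bins=32):
    """Sum of regional MI terms, one per organ mask plus the background.

    fixed, moving : intensity volumes of identical shape (e.g. CT and MRI)
    organ_masks   : sequence of boolean arrays marking each segmented organ
    """
    background = ~np.any(organ_masks, axis=0)
    regions = list(organ_masks) + [background]
    # Uniform weighting across regions is an illustrative choice only.
    return sum(mutual_information(fixed[m], moving[m], bins) for m in regions)

# Toy usage with random volumes and a single spherical "organ" mask.
rng = np.random.default_rng(0)
fixed = rng.normal(size=(32, 32, 32))
moving = fixed + 0.1 * rng.normal(size=fixed.shape)
zz, yy, xx = np.mgrid[:32, :32, :32]
organ = ((xx - 16) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2) < 10 ** 2
print(organ_focused_mi(fixed, moving, np.array([organ])))
```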
Abstract:
This is an account of some aspects of the geometry of Kähler affine metrics based on considering them as smooth metric measure spaces and applying the comparison geometry of Bakry-Émery Ricci tensors. Such techniques yield a version for Kähler affine metrics of Yau's Schwarz lemma for volume forms. By a theorem of Cheng and Yau, there is a canonical Kähler affine Einstein metric on a proper convex domain, and the Schwarz lemma gives a direct proof of its uniqueness up to homothety. The potential for this metric is a function canonically associated to the cone, characterized by the property that its level sets are hyperbolic affine spheres foliating the cone. It is shown that for an n-dimensional cone, a rescaling of the canonical potential is an n-normal barrier function in the sense of interior point methods for conic programming. It is explained also how to construct from the canonical potential Monge-Ampère metrics of both Riemannian and Lorentzian signatures, and a mean curvature zero conical Lagrangian submanifold of the flat para-Kähler space.
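A standard example, included here only for orientation and consistent with the description above: for the positive orthant the canonical potential can be taken to be the familiar logarithmic barrier of interior point methods, which solves the defining Monge-Ampère equation and is logarithmically homogeneous.

```latex
% Example: the positive orthant \Omega = \{x \in \mathbb{R}^n : x_i > 0\}.
% The potential below satisfies \det D^2 F = e^{2F}, is logarithmically
% homogeneous, and its level sets (\prod_i x_i = \text{const}) foliate the cone.
F(x) \;=\; -\sum_{i=1}^{n} \log x_i,
\qquad
\det D^2 F(x) \;=\; \prod_{i=1}^{n} x_i^{-2} \;=\; e^{2F(x)},
\qquad
F(\lambda x) \;=\; F(x) - n \log \lambda .
```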
Abstract:
In this thesis, we propose several advances in the numerical and computational algorithms that are used to determine tomographic estimates of physical parameters in the solar corona. We focus on methods for both global dynamic estimation of the coronal electron density and estimation of local transient phenomena, such as coronal mass ejections, from empirical observations acquired by instruments onboard the STEREO spacecraft. We present a first look at tomographic reconstructions of the solar corona from multiple points of view, which motivates the developments in this thesis. In particular, we propose a method for linear equality constrained state estimation that leads toward more physical global dynamic solar tomography estimates. We also present a formulation of the local static estimation problem, i.e., the tomographic estimation of local events and structures like coronal mass ejections, that couples the tomographic imaging problem to a phase-field-based level set method. This formulation will render feasible the 3D tomography of coronal mass ejections from limited observations. Finally, we develop a scalable algorithm for ray tracing dense meshes, which allows efficient computation of many of the tomographic projection matrices needed for the applications in this thesis.
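The linear equality constrained estimation mentioned above can be illustrated, in its simplest static form, by a constrained least-squares problem solved through a KKT system; the symbols below are generic placeholders and not tied to the thesis' notation.

```latex
% Generic constrained least-squares illustration (notation not from the thesis):
% y : observations, H : tomographic projection matrix, x : electron density,
% Cx = d : linear equality constraints imposed on the estimate.
\min_{x} \ \tfrac{1}{2}\,\| y - Hx \|_2^2
\quad \text{subject to} \quad Cx = d
\qquad\Longleftrightarrow\qquad
\begin{pmatrix} H^\top H & C^\top \\ C & 0 \end{pmatrix}
\begin{pmatrix} x \\ \lambda \end{pmatrix}
=
\begin{pmatrix} H^\top y \\ d \end{pmatrix}.
```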
Abstract:
The thesis mainly concerns the study of intrinsically regular submanifolds of low codimension in the Heisenberg group H^n, called H-regular surfaces of low codimension, from the point of view of geometric measure theory. We consider an H-regular surface of H^n of codimension k, with k between 1 and n, parametrized by a uniformly intrinsically differentiable map acting between two homogeneous complementary subgroups of H^n, with horizontal target subgroup of dimension k. In particular, the considered submanifold is the intrinsic graph of the parametrization. We extend various results of Ambrosio, Serra Cassano and Vittone, available for the case k = 1. We prove that the uniform intrinsic differentiability of the parametrizing map is equivalent to the existence and continuity of its intrinsic differential, to the local existence of a suitable approximating family of Euclidean regular maps, and, when the domain and the codomain of the map are orthogonal, to the existence and continuity of suitably defined intrinsic partial derivatives of the function. Subsequently, we present a series of area formulas, proved in collaboration with V. Magnani, which make it possible to compute the (2n+2−k)-dimensional spherical Hausdorff measure and the (2n+2−k)-dimensional centered Hausdorff measure of the parametrized H-regular surface with respect to any homogeneous distance fixed on H^n. Furthermore, we focus on (G,M)-regular sets of G, where G and M are two arbitrary Carnot groups. Suitable implicit function theorems ensure the local existence of an intrinsic parametrization of such a set at any of its points, and we prove that it is uniformly intrinsically differentiable. Finally, we prove a coarea-type inequality for a continuously Pansu differentiable function acting between two Carnot groups endowed with homogeneous distances, assuming that the level sets of the function are uniformly lower Ahlfors regular and that the Pansu differential is everywhere surjective.
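For readers less familiar with the setting, one common set of conventions for H^n is recalled below (the exact normalisation of the group law varies by author); the homogeneous dimension 2n+2 is what appears in the Hausdorff dimensions (2n+2−k) above.

```latex
% One common convention for the Heisenberg group H^n (normalisations vary):
H^n \simeq \mathbb{R}^{2n+1} \ni (x, y, t), \qquad x, y \in \mathbb{R}^n,\ t \in \mathbb{R},
\\
(x, y, t) \cdot (x', y', t') \;=\; \Big(x + x',\; y + y',\;
    t + t' + \tfrac{1}{2}\big(\langle x, y'\rangle - \langle y, x'\rangle\big)\Big),
\\
\delta_\lambda(x, y, t) \;=\; (\lambda x,\ \lambda y,\ \lambda^2 t),
\qquad \text{homogeneous dimension } Q = 2n + 2 .
```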
Abstract:
Evolving interfaces were initially applied to scientific problems in Fluid Dynamics. With the advent of the more robust modeling provided by the Level Set method, their original range of applicability was extended. In the Geometric Modeling area specifically, the work published so far relating Level Sets to three-dimensional surface reconstruction has concentrated on reconstruction from point clouds scattered in space; the approach based on parallel planar slices transversal to the object to be reconstructed is still incipient. Motivated by this, the present work analyses the feasibility of the Level Set method for three-dimensional reconstruction, offering a methodology that integrates ideas from the literature already shown to be effective with proposals for handling limitations of the method that have not yet been satisfactorily addressed, in particular the excessive smoothing of fine contour features during Level Set evolution. To address this, the Particle Level Set variant is suggested as a solution, given its proven ability to preserve the mass of dynamic fronts. Finally, synthetic and real data sets are used to evaluate the proposed three-dimensional surface reconstruction methodology qualitatively.
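To make the mass-loss issue concrete, the toy sketch below (our own illustrative assumptions: grid size, a rotating velocity field, first-order upwinding) advects a circular front with a plain level set update and reports the enclosed area before and after. A Particle Level Set implementation would additionally seed Lagrangian marker particles near the zero level set and use escaped particles to correct the function after each step.

```python
import numpy as np

# Plain level set advection of a circle under a rotating velocity field,
# illustrating the mass (area) loss that Particle Level Set aims to correct.
n = 128
dx = 1.0 / n
dt = 0.2 * dx                       # CFL-safe step for |velocity| <= 0.5
steps = 400
coords = (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(coords, coords, indexing="ij")

phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.75) ** 2) - 0.15   # signed distance to a circle
u, v = -(Y - 0.5), (X - 0.5)                              # rigid rotation about the centre

def area(phi):
    """Area enclosed by the zero level set (phi < 0 marks the inside)."""
    return float(np.sum(phi < 0)) * dx * dx

a0 = area(phi)
for _ in range(steps):
    # First-order upwind gradients (simple and diffusive -- the point of the demo).
    dpx = np.where(u > 0, phi - np.roll(phi, 1, 0), np.roll(phi, -1, 0) - phi) / dx
    dpy = np.where(v > 0, phi - np.roll(phi, 1, 1), np.roll(phi, -1, 1) - phi) / dx
    phi = phi - dt * (u * dpx + v * dpy)

print(f"area before: {a0:.4f}, after: {area(phi):.4f}")
# A Particle Level Set variant would now detect marker particles that escaped to
# the wrong side of the interface and rebuild phi from them, restoring the mass.
```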
Abstract:
The characterization of locomotor tasks plays an important role in improving the quality of life of a growing elderly population. This paper addresses this matter by characterizing the locomotion of two population groups with different functional fitness levels (high or low) while executing three different tasks: gait, stair ascent and stair descent. Features were extracted from the gait data, and feature selection methods were used to obtain the set of features that allows the functional fitness levels to be differentiated. Unsupervised learning was used to validate the sets obtained and, ultimately, indicated that it is possible to distinguish the two population groups. The sets of most discriminative features for each task are identified and thoroughly analysed. Copyright © 2014 SCITEPRESS - Science and Technology Publications. All rights reserved.
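A minimal sketch of the kind of pipeline described above, using a hypothetical feature matrix: univariate feature selection followed by unsupervised clustering, with the clusters then compared against the known functional-fitness groups. The simulated data, the selector and the clustering algorithm are assumptions, not those of the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Hypothetical data: rows are trials of one task (e.g. stair ascent),
# columns are extracted gait features; labels are the functional fitness groups.
rng = np.random.default_rng(1)
X_high = rng.normal(0.0, 1.0, size=(40, 12))
X_low = rng.normal(0.8, 1.0, size=(40, 12))      # shifted group, purely illustrative
X = np.vstack([X_high, X_low])
y = np.array([0] * 40 + [1] * 40)

# Select the features that best separate the two groups...
selector = SelectKBest(score_func=f_classif, k=4).fit(X, y)
X_sel = StandardScaler().fit_transform(selector.transform(X))

# ...then check, with unsupervised learning, that the groups are recoverable.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_sel)
print("selected feature indices:", selector.get_support(indices=True))
print("agreement with fitness groups (ARI):", adjusted_rand_score(y, clusters))
```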
Abstract:
Thesis submitted to the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia for the degree of Doctor of Philosophy in Environmental Sciences
Abstract:
1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
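As one concrete instance of strategy (ii) above ('predict first, assemble later'), the sketch below fits a simple per-species model, stacks the predicted probabilities and then clusters sites into community types; the simulated survey data, the logistic model and k-means are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Hypothetical inputs: environmental predictors per site and a site-by-species
# presence/absence matrix from sparse survey data.
rng = np.random.default_rng(2)
n_sites, n_env, n_species = 200, 5, 30
env = rng.normal(size=(n_sites, n_env))
presence = (rng.random((n_sites, n_species)) <
            1 / (1 + np.exp(-env @ rng.normal(size=(n_env, n_species))))).astype(int)

# 'Predict first': one distribution model per species (constant columns kept as-is).
stack = np.column_stack([
    LogisticRegression(max_iter=1000).fit(env, presence[:, s]).predict_proba(env)[:, 1]
    if presence[:, s].min() != presence[:, s].max() else presence[:, s].astype(float)
    for s in range(n_species)
])

# 'Assemble later': cluster the stacked predictions into community types.
community_type = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(stack)
print("sites per community type:", np.bincount(community_type))
```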
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance, or spectral properties); otherwise, interpretation of the results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present some new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; however, these methods can be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level aims at equalizing individual stimuli in terms of their mean luminance: each data point in the stimulus is adjusted so that the stimulus matches a standardized value defined across the stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric that equals the root mean square of the distance between the two populations of objects as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to minimize, in a completely data-driven manner, the spectral differences between the image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate this approach here in detail for complex visual stimuli.
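A compact sketch of the two levels described above, under assumptions of our own (grayscale images of equal size, full 2D FFT amplitude spectra rather than separate x/y frequency profiles, and a random search over permutations rather than the authors' exact procedure):

```python
import numpy as np

def equalize_mean_luminance(images, target=0.5):
    """Level 1: shift each image so that its mean luminance equals a standard value."""
    return [img + (target - img.mean()) for img in images]

def mean_amplitude_spectrum(images):
    """Average 2D FFT amplitude spectrum of a set of images."""
    return np.mean([np.abs(np.fft.fft2(img)) for img in images], axis=0)

def spectral_dissimilarity(set_a, set_b):
    """Level 2: RMS distance between the mean spectra of two stimulus sets."""
    diff = mean_amplitude_spectrum(set_a) - mean_amplitude_spectrum(set_b)
    return float(np.sqrt(np.mean(diff ** 2)))

def minimize_by_permutation(pool, n_a, n_iter=2000, seed=0):
    """Randomly reassign stimuli to the two sets and keep the best split found."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        idx = rng.permutation(len(pool))
        a, b = [pool[i] for i in idx[:n_a]], [pool[i] for i in idx[n_a:]]
        d = spectral_dissimilarity(a, b)
        if best is None or d < best[0]:
            best = (d, idx)
    return best

# Toy usage with random 'stimuli'.
rng = np.random.default_rng(3)
stimuli = equalize_mean_luminance([rng.random((64, 64)) for _ in range(40)])
best_d, best_idx = minimize_by_permutation(stimuli, n_a=20)
print("minimal spectral dissimilarity found:", best_d)
```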
Abstract:
A new phylogenetic analysis of the Nyssorhynchus subgenus (Danoff-Burg and Conn, unpub. data) using six data sets {morphological (all life stages); scanning electron micrographs of eggs; nuclear ITS2 sequences; mitochondrial COII, ND2 and ND6 sequences} revealed different topologies when each data set was analyzed separately but no heterogeneity between the data sets using the arn test. Consequently, the most accurate estimate of the phylogeny was obtained when all the data were combined. This new phylogeny supports a monophyletic Nyssorhynchus subgenus but both previously recognized sections in the subgenus (Albimanus and Argyritarsis) were demonstrated to be paraphyletic relative to each other and four of the seven clades included species previously placed in both sections. One of these clades includes both Anopheles darlingi and An. albimanus, suggesting that the ability to vector malaria effectively may have originated once in this subgenus. Both a conserved (315 bp) and a variable (425 bp) region of the mitochondrial COI gene from 15 populations of An. darlingi from Belize, Bolivia, Brazil, French Guiana, Peru and Venezuela were used to examine the evolutionary history of this species and to test several analytical assumptions. Results demonstrated (1) parsimony analysis is equally informative compared to distance analysis using NJ; (2) clades or clusters are more strongly supported when these two regions are combined compared to either region separately; (3) evidence (in the form of remnants of older haplotype lineages) for two colonization events; and (4) significant genetic divergence within the population from Peixoto de Azevedo (State of Mato Grosso, Brazil). The oldest lineage includes populations from Peixoto, Boa Vista (State of Roraima) and Dourado (State of São Paulo).
Abstract:
For the last 2 decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of the supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the supermatrix approach that is widely used). In this study, seven of the most commonly used supertree methods are investigated by using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test if the performance levels were affected by the heuristic searches rather than the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed a poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, that is, the most recent approach tested here, were promising with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set sizes and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and the increase in computing power to handle large data sets. The latter would prove to be particularly useful for promising approaches such as the maximum quartet fit method that yet requires substantial computing power.
Abstract:
BACKGROUND: Since the emergence of diffusion tensor imaging, a lot of work has been done to better understand the properties of diffusion MRI tractography. However, the validation of the reconstructed fiber connections remains problematic in many respects. For example, it is difficult to assess whether a connection is the result of the diffusion coherence contrast itself or simply the result of other, uncontrolled factors such as noise, brain geometry and algorithmic characteristics. METHODOLOGY/PRINCIPAL FINDINGS: In this work, we propose a method to estimate the respective contributions of diffusion coherence versus other effects to a tractography result by comparing data sets with and without diffusion coherence contrast. We use this methodology to assign a confidence level to every gray matter to gray matter connection and add this new information directly to the connectivity matrix. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that, whereas we can have strong confidence in the mid- and long-range connections obtained by a tractography experiment, it is difficult to distinguish short connections traced due to diffusion coherence contrast from those produced by chance due to the other uncontrolled factors of the tractography methodology.
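One plausible way to operationalize the comparison described above, not necessarily the authors' exact procedure: streamline counts from the measured data are compared against counts obtained from a matched data set without diffusion coherence contrast, and each connection's confidence is the fraction of its streamlines not explained by the null data.

```python
import numpy as np

def connection_confidence(real_counts, null_counts):
    """Confidence per gray-matter-to-gray-matter connection.

    real_counts : streamline counts from tractography on the measured data
    null_counts : counts from tractography on data without coherence contrast
    Returns values in [0, 1]; 1 means no streamlines are explained by the null data.
    """
    real = np.asarray(real_counts, dtype=float)
    null = np.asarray(null_counts, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        conf = np.where(real > 0, np.clip(1.0 - null / real, 0.0, 1.0), 0.0)
    return conf

# Toy usage: a long-range connection keeps high confidence, a short spurious one does not.
real = np.array([[0, 120, 3], [120, 0, 40], [3, 40, 0]])
null = np.array([[0, 5, 2], [5, 0, 35], [2, 35, 0]])
print(connection_confidence(real, null))
```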