890 results for LEVEL SET METHODS
Abstract:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems, and thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail with experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has generally been motivated by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties, and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh; that study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes. A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed in \cite{Juric1998} for the computation of a range of phase change problems, including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004}, and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge for methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. A coupled level set and volume of fluid method with a diffuse interface approach was used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}; the effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations.
A similar approach was adopted in \cite{Son2008} to study various boiling problems, including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical}, and flow boiling in a finned microchannel \cite{lee2012direct}. The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing the continuity and divergence-free conditions for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on the local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous pressure oscillations and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo-time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}. Although their method is based on the VOF, the large pressure peaks associated with the sharp mass source were observed to be similar to those of the interface tracking method. Such spurious pressure fluctuations are particularly undesirable because their effect is transmitted globally in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of an interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending the velocity of one phase to the entire domain, suggested by Nguyen et al. in \cite{nguyen2001boundary}, suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant-coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet but is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical pressure oscillations and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps that avoid an unphysical pressure source term. I also propose (iii) a general mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations.
Finally, a study of Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase phase-change method.
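As a concrete reference for the one-dimensional verification cases mentioned above, the classical one-phase Stefan problem admits a similarity solution against which a computed interface position can be checked. Below is a minimal Python sketch, not the thesis code; the Stefan number and diffusivity are assumed values:

import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

Ste = 0.5        # Stefan number c_p*dT/h_fg (assumed value)
alpha = 1e-6     # thermal diffusivity of the heated phase [m^2/s] (assumed)

# The front follows s(t) = 2*lam*sqrt(alpha*t), where lam solves the
# transcendental Stefan condition lam*exp(lam^2)*erf(lam) = Ste/sqrt(pi).
f = lambda lam: lam * np.exp(lam**2) * erf(lam) - Ste / np.sqrt(np.pi)
lam = brentq(f, 1e-6, 5.0)

t = np.linspace(0.0, 10.0, 5)
s = 2.0 * lam * np.sqrt(alpha * t)    # exact front position over time
print(f"lambda = {lam:.4f}")
print("s(t) =", s)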
Abstract:
The nutritional status of eight clones of Eucalyptus grandis, at 1.0 and 4.7 years of age and cultivated in medium-textured Ustults (US) and Quartzipsamments (PS) soils in Lençóis Paulista, São Paulo, was evaluated by the Diagnosis and Recommendation Integrated System (DRIS) and Critical Level (CL) methods. Based on multivariate discriminant analysis, the DRIS indices described the nutritional status of the trees better in relation to tree age and soil type than in relation to nutrient composition. Spearman's correlation coefficients showed statistically significant relationships between volumetric tree growth and nutrients whether DRIS indices or foliar nutrient concentrations were applied. However, the DRIS indices indicated a lower number of trees with nutritional deficiencies than the CL method. According to the CL method, P, S, and Ca were deficient in the majority of the soil and tree age categories. By the DRIS method, Ca was the only deficient nutrient in PS soils and appeared to be particularly limiting in one-year-old trees. In conclusion, the DRIS method was more efficient than the CL method in evaluating the nutritional status of eucalyptus trees.
Abstract:
The complete basis set methods CBS-4, CBS-QB3, and CBS-APNO, and the Gaussian methods G2 and G3 were used to calculate the gas phase energy differences between six different carboxylic acids and their respective anions. Two different continuum methods, SM5.42R and CPCM, were used to calculate the free energy differences of solvation for the acids and their anions. Relative pKa values were calculated for each acid using one of the acids as a reference point. The CBS-QB3 and CBS-APNO gas phase calculations, combined with the CPCM/HF/6-31+G(d)//HF/6-31G(d) or CPCM/HF/6-31+G(d)//HF/6-31+G(d) continuum solvation calculations on the lowest energy gas phase conformer, and with the conformationally averaged values, give results accurate to ½ pKa unit. © 2001 American Institute of Physics.
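The relative-pKa cycle described above reduces to a short calculation once the gas-phase and solvation free energies are in hand. A hedged Python sketch of that cycle; all energies and the reference pKa below are placeholders, not values from the paper:

import math

R = 1.987e-3                     # gas constant [kcal/(mol K)]
T = 298.15                       # temperature [K]
RT_LN10 = R * T * math.log(10)   # ~1.364 kcal/mol

def dG_aq(dG_gas, dG_solv_anion, dG_solv_acid):
    """Aqueous deprotonation free energy HA -> A- + H+ (proton terms
    cancel in the *relative* cycle, so they are omitted here)."""
    return dG_gas + dG_solv_anion - dG_solv_acid

# Placeholder energies [kcal/mol] for a reference acid and a target acid.
ref = dG_aq(dG_gas=345.0, dG_solv_anion=-77.0, dG_solv_acid=-7.0)
acid = dG_aq(dG_gas=342.0, dG_solv_anion=-75.0, dG_solv_acid=-6.5)

pKa_ref = 4.76                   # assumed experimental anchor (e.g. acetic acid)
pKa = pKa_ref + (acid - ref) / RT_LN10
print(f"relative pKa = {pKa:.2f}")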
Abstract:
Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Now, estimating such a level set—and not solely its volume—and quantifying uncertainties on it are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
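As a schematic illustration of the idea, the sketch below computes the pointwise coverage function p(x) = P(f(x) > T | data) under a Gaussian process posterior, giving both a plug-in estimate of the level set and a band where the set remains uncertain. The toy function, kernel parameters, and thresholds are assumptions, and the full random set machinery of the paper is not reproduced:

import numpy as np
from scipy.stats import norm

def rbf(a, b, ell=0.3, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Toy 1-D example: observe a costly-to-evaluate function at a few design points.
f = lambda x: np.sin(3 * x) + 0.5 * x
X = np.array([0.0, 0.4, 1.1, 1.7, 2.3])
y = f(X)

xs = np.linspace(0.0, 2.5, 200)
K = rbf(X, X) + 1e-8 * np.eye(len(X))
Ks = rbf(xs, X)
mean = Ks @ np.linalg.solve(K, y)
var = rbf(xs, xs).diagonal() - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))

T = 0.8                                  # excursion threshold
p = norm.cdf((mean - T) / np.sqrt(np.maximum(var, 1e-12)))
estimate = xs[p >= 0.5]                  # plug-in estimate of {x : f(x) > T}
uncertain = xs[(p > 0.05) & (p < 0.95)]  # region where the set is still uncertain
print(f"{len(estimate)} grid points in the estimated set, "
      f"{len(uncertain)} points still ambiguous")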
Abstract:
The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
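For concreteness, a minimal sketch of a second-order fabric tensor computed from contact normals (the common average of outer products n (x) n; the synthetic normals are an assumption, and this is not the code used in the work):

import numpy as np

rng = np.random.default_rng(0)
normals = rng.normal(size=(500, 3))           # unit contact normals n_c
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Fabric tensor N = (1/Nc) * sum_c n_c (x) n_c ; trace(N) = 1 by construction.
N = np.einsum('ci,cj->ij', normals, normals) / len(normals)

# The deviatoric part quantifies anisotropy of the contact distribution:
dev = N - np.eye(3) / 3.0
anisotropy = np.sqrt(1.5 * np.sum(dev * dev))
print("fabric tensor:\n", N.round(3), "\nscalar anisotropy:", round(anisotropy, 4))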
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision in the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide a computational savings.
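A toy sketch of the randomized-property idea discussed above: perturbing an otherwise uniform strength field lifts the degeneracy that round-off error would otherwise resolve. The perturbation magnitudes and the weakest-link ordering are illustrative assumptions, not the thesis's setup:

import numpy as np

rng = np.random.default_rng(6)
n_elem = 1000
sigma_c = 1.0e6                            # nominal cohesive strength [Pa] (assumed)
for eps in (0.0, 1e-12, 1e-6, 1e-2):       # perturbation magnitudes (assumed)
    strength = sigma_c * (1.0 + eps * rng.standard_normal(n_elem))
    # Under uniform loading, the failure order is set by the weakest links;
    # with eps = 0 every element ties and round-off alone would decide.
    order = np.argsort(strength)
    print(f"eps={eps:.0e}: first three failing elements -> {order[:3]}")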
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
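A hedged illustration of the Thick Level-Set coupling between damage and the level set: damage d is slaved to the distance phi from the damage front and saturates over the length scale lc, which caps the damage gradient at roughly 1/lc. The linear ramp used here is an assumed profile; published TLS formulations use various smooth choices:

import numpy as np

lc = 0.1                                  # characteristic length [m] (assumed)
x = np.linspace(-0.3, 0.3, 601)
phi = 0.2 - np.abs(x)                     # distance to a front located at |x| = 0.2

d = np.clip(phi / lc, 0.0, 1.0)           # damage: 0 (intact) .. 1 (fully damaged)
grad = np.gradient(d, x)
print("max |grad d| =", np.abs(grad).max(), "~ 1/lc =", 1.0 / lc)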
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
Abstract:
In this thesis, we propose several advances in the numerical and computational algorithms that are used to determine tomographic estimates of physical parameters in the solar corona. We focus on methods for both global dynamic estimation of the coronal electron density and estimation of local transient phenomena, such as coronal mass ejections, from empirical observations acquired by instruments onboard the STEREO spacecraft. We present a first look at tomographic reconstructions of the solar corona from multiple points-of-view, which motivates the developments in this thesis. In particular, we propose a method for linear equality constrained state estimation that leads toward more physical global dynamic solar tomography estimates. We also present a formulation of the local static estimation problem, i.e., the tomographic estimation of local events and structures like coronal mass ejections, that couples the tomographic imaging problem to a phase field based level set method. This formulation will render feasible the 3D tomography of coronal mass ejections from limited observations. Finally, we develop a scalable algorithm for ray tracing dense meshes, which allows efficient computation of many of the tomographic projection matrices needed for the applications in this thesis.
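A generic sketch of the building block behind linear equality constrained estimation: minimize ||Ax - b||^2 subject to Cx = d via the KKT system. The matrices below are toys, not the actual tomographic operators or dynamic model:

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))     # projection matrix (toy stand-in)
b = rng.normal(size=40)           # observations
C = np.ones((1, 10))              # e.g. enforce a known total (toy constraint)
d = np.array([5.0])

n, m = A.shape[1], C.shape[0]
KKT = np.block([[A.T @ A, C.T],
                [C, np.zeros((m, m))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]         # estimate and Lagrange multiplier
print("constraint residual:", np.abs(C @ x - d).max())   # ~ 0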
Abstract:
OBJECTIVE: To evaluate the nutritional quality of the meals served at the food service unit (Unidade de Alimentação e Nutrição) of a factory in the metropolitan region of São Paulo. METHODS: From the menus served over one year (242 days) at the unit, 30% were selected by systematic random sampling and evaluated using the Meal Quality Index (Índice de Qualidade da Refeição), based on the recommendations of the World Health Organization and the Brazilian Ministry of Health. This index comprises five items, each scored from zero to 20 points: adequacy of the supply of vegetables and fruits; supply of carbohydrates; supply of total fat; supply of saturated fat; and menu variety. During the period analyzed, 367 preparations were served, grouped into 30 categories according to composition and method of preparation. Spearman's correlation was used to investigate the correlation of the index with meal nutrients. Analyses were performed in the STATA statistical package, with the significance level set at 5%. RESULTS: The mean Meal Quality Index was 64.60 (SD=21.18) points, with 44% of the meals classified as "needing improvement" and only 25% as "adequate". Besides rice and beans, served daily, the most frequent preparations were: vegetables and fruits (30%), pasta and creams (12%), fried foods (9%), and cream-based desserts (8%). A positive correlation was found between the Meal Quality Index and vitamin C (r=0.32). CONCLUSION: Despite the constant presence of fruits and vegetables, the preparations offered need to be brought in line with recommendations for healthy eating that effectively contribute to health promotion.
Abstract:
Objective: The aim of this article is to propose an integrated framework for extracting and describing patterns of disorders from medical images using a combination of linear discriminant analysis and active contour models. Methods: A multivariate statistical methodology was first used to identify the most discriminating hyperplane separating the two groups of images (healthy controls and patients with schizophrenia) contained in the input data. The present work then makes the differences found by the multivariate statistical method explicit by subtracting the discriminant models of controls and patients, weighted by the pooled variance between the two groups. A variational level-set technique was used to segment clusters of these differences. A label for each anatomical change was obtained using the Talairach atlas. Results: In this work all the data were analysed simultaneously rather than assuming a priori regions of interest. As a consequence, by using active contour models we were able to obtain regions of interest that emerged from the data. The results were evaluated using, as a gold standard, well-known facts about the neuroanatomical changes related to schizophrenia. Most of the items in the gold standard were covered by our result set. Conclusions: We argue that such an investigation provides a suitable framework for characterising the high complexity of magnetic resonance images in schizophrenia, as the results obtained indicate a high sensitivity rate with respect to the gold standard. (C) 2010 Elsevier B.V. All rights reserved.
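A schematic sketch of the statistical half of such a pipeline on synthetic voxel data: a group difference map weighted by the pooled variability, followed by cluster labeling. The exact weighting used in the paper and its variational level-set refinement are not reproduced; the thresholds and the implanted effect are assumptions:

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
controls = rng.normal(0.0, 1.0, size=(25, 32, 32, 32))   # toy voxel images
patients = rng.normal(0.0, 1.0, size=(25, 32, 32, 32))
patients[:, 10:14, 10:14, 10:14] += 3.0                   # implanted group effect

# Difference of the group models, weighted by the pooled standard deviation:
mu_c, mu_p = controls.mean(axis=0), patients.mean(axis=0)
pooled_sd = np.sqrt(0.5 * (controls.var(axis=0) + patients.var(axis=0)))
diff = (mu_p - mu_c) / (pooled_sd + 1e-12)

# Segment clusters of strong differences (a crude stand-in for the
# variational level-set step) and label them:
mask = np.abs(diff) > 2.0
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
print(f"{n} candidate cluster(s); largest = {int(sizes.max())} voxels")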
Abstract:
This note gives a theory of state transition matrices for linear systems of fuzzy differential equations, which is used to derive a fuzzy version of the classical variation of constants formula. A simple example of a time-independent control system is used to illustrate the methods. While problems similar to the crisp case arise for time-dependent systems, in time-independent cases the calculations reduce to elementary eigenvalue-eigenvector problems. In particular, for nonnegative or nonpositive matrices, the problems at each level set can easily be solved in MATLAB to give the level sets of the fuzzy solution. (C) 2002 Elsevier Science B.V. All rights reserved.
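A hedged sketch of the level set calculation for the nonnegative-matrix case mentioned above: since exp(At) is entrywise nonnegative, each alpha-cut interval of the fuzzy initial state maps endpoint-wise through the crisp state transition matrix. The example matrix and the triangular fuzzy initial condition are assumptions (and the sketch is in Python rather than MATLAB):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.5, 0.0]])          # nonnegative matrix (assumed example)
t = 1.0
Phi = expm(A * t)                   # crisp state transition matrix

# Triangular fuzzy initial state per component: support [0.8, 1.2], peak 1.0.
for alpha in (0.0, 0.5, 1.0):
    lo = np.full(2, 0.8 + 0.2 * alpha)   # alpha-cut lower endpoints
    hi = np.full(2, 1.2 - 0.2 * alpha)   # alpha-cut upper endpoints
    x_lo, x_hi = Phi @ lo, Phi @ hi      # endpoints of the solution's alpha-cut
    print(f"alpha={alpha}: x(t) in [{x_lo.round(3)}, {x_hi.round(3)}]")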
Abstract:
This study aimed to characterize air pollution and the associated carcinogenic risks of polycyclic aromatic hydrocarbons (PAHs) at an urban site, to identify possible emission sources of PAHs using several statistical methodologies, and to analyze the influence of other air pollutants and meteorological variables on PAH concentrations. The air quality and meteorological data were collected in Oporto, the second largest city of Portugal. Eighteen PAHs (the 16 PAHs considered by the United States Environmental Protection Agency (USEPA) as priority pollutants, plus dibenzo[a,l]pyrene and benzo[j]fluoranthene) were collected daily for 24 h in air (gas phase and particles) during 40 consecutive days in November and December 2008, by constant low-flow samplers using polytetrafluoroethylene (PTFE) membrane filters for particulate (PM10- and PM2.5-bound) PAHs and pre-cleaned polyurethane foam plugs for gaseous compounds. The other monitored air pollutants were SO2, PM10, NO2, CO, and O3; the meteorological variables were temperature, relative humidity, wind speed, total precipitation, and solar radiation. Benzo[a]pyrene reached a mean concentration of 2.02 ng m−3, surpassing the EU annual limit value. The target carcinogenic risks were equal to the health-based guideline level set by USEPA (10−6) at the studied site, with the cancer risks of eight PAHs reaching levels of 9.98×10−7 in PM10 and 1.06×10−6 in air. The applied statistical methods (correlation matrix, cluster analysis, and principal component analysis) were in agreement in the grouping of the PAHs; the groups were formed according to chemical structure (number of rings), phase distribution, and emission sources. PAH diagnostic ratios were also calculated to evaluate the main emission sources. Diesel vehicular emissions were the major source of PAHs at the studied site; in addition, emissions from residential heating and an oil refinery were identified as contributing to PAH levels in the area. Additionally, principal component regression indicated that SO2, NO2, PM10, CO, and solar radiation were positively correlated with PAH concentrations, while O3, temperature, relative humidity, and wind speed were negatively correlated.
Abstract:
Background: Cardiovascular diseases affect people worldwide. Individuals with Down Syndrome (DS) have an up to sixteen-fold greater risk of mortality from cardiovascular diseases. Objective: To evaluate the effects of aerobic and resistance exercise on blood pressure and hemodynamic variables of young individuals with DS. Methods: A total of 29 young individuals with DS participated in the study. They were divided into two groups: aerobic training (AT) (n = 14) and resistance training (RT) (n = 15). Their mean age was 15.7 ± 2.82 years. The training program lasted 12 weeks, with a frequency of three times a week for AT and twice a week for RT. AT was performed on a treadmill/bicycle ergometer at an intensity between 50%-70% of the heart rate reserve. RT comprised nine exercises performed in three sets of 12 repetitions maximum. Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean blood pressure (MBP), and hemodynamic variables were assessed beat-to-beat using the Finometer device before and after the training program. Descriptive analysis, the Shapiro-Wilk test to check the normality of the data, and two-way repeated measures ANOVA were used to compare pre- and post-training variables. Pearson's correlation coefficient was calculated to correlate hemodynamic variables. SPSS version 18.0 was used, with the significance level set at p < 0.05. Results: After twelve weeks of aerobic or resistance training, significant reductions in SBP, DBP, and MBP were observed. Conclusion: This study suggests a chronic hypotensive effect of moderate aerobic and resistance exercise in young individuals with DS.
Abstract:
Motivation. The study of human brain development in its early stage is today possible thanks to in vivo fetal magnetic resonance imaging (MRI) techniques. A quantitative analysis of the fetal cortical surface represents a new approach which can be used as a marker of cerebral maturation (such as gyration) and also for studying central nervous system pathologies [1]. However, this quantitative approach is a major challenge for several reasons. First, movement of the fetus inside the amniotic cavity requires very fast MRI sequences to minimize motion artifacts, resulting in a poor spatial resolution and/or lower SNR. Second, due to the ongoing myelination and cortical maturation, the appearance of the developing brain differs very much from the homogeneous tissue types found in adults. Third, due to low resolution, fetal MR images considerably suffer from partial volume (PV) effects, sometimes in large areas. Today extensive efforts are made to deal with the reconstruction of high-resolution 3D fetal volumes [2,3,4] to cope with intra-volume motion and low SNR. However, few studies exist related to the automated segmentation of fetal MR imaging. [5] and [6] work on the segmentation of specific areas of the fetal brain such as the posterior fossa, brainstem or germinal matrix. A first attempt at automated brain tissue segmentation has been presented in [7] and in our previous work [8]. Both methods apply the Expectation-Maximization Markov Random Field (EM-MRF) framework, but contrary to [7] we do not need any anatomical atlas prior.

Data set & Methods. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single-shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). Each fetus has 6 axial volumes (around 15 slices per volume), each of them acquired in about 1 min. Each volume is shifted by 1 mm with respect to the previous one. Gestational age (GA) ranges from 29 to 32 weeks. The mother is under sedation. Each volume is manually segmented to extract the fetal brain from surrounding maternal tissues. Then, inhomogeneity intensity correction is performed using [9] and linear intensity normalization is performed to have intensity values that range from 0 to 255. Note that due to the intra-tissue variability of the developing brain some intensity variability still remains. For each fetus, a high spatial resolution image of isotropic voxel size of 1.09 mm is created applying [2] and using B-splines for the scattered data interpolation [10] (see Fig. 1). Then, basal ganglia (BG) segmentation is performed on this super-reconstructed volume. An active contour framework with a Level Set (LS) implementation is used. Our LS follows a slightly different formulation from the well-known Chan-Vese formulation [11]. In our case, the LS evolves forcing the mean of the inside of the curve to be the mean intensity of the basal ganglia. Moreover, we add a local spatial prior through a probabilistic map created by fitting an ellipsoid onto the basal ganglia region. Some user interaction is needed to set the mean intensity of BG (green dots in Fig. 2) and the initial fitting points for the probabilistic prior map (blue points in Fig. 2). Once the basal ganglia are removed from the image, brain tissue segmentation is performed as described in [8].

Results. The case study presented here has 29 weeks of GA. The high-resolution reconstructed volume is presented in Fig. 1. The steps of BG segmentation are shown in Fig. 2. Overlap in comparison with manual segmentation is quantified by the Dice similarity index (DSI), equal to 0.829 (values above 0.7 are considered a very good agreement). This BG segmentation has been applied on 3 other subjects ranging from 29 to 32 weeks GA, with DSI values of 0.856, 0.794 and 0.785. Our segmentation of the inner (red and blue contours) and outer cortical surface (green contour) is presented in Fig. 3. Finally, to refine the results we include our WM segmentation in the FreeSurfer software [12] with some manual corrections to obtain Fig. 4.

Discussion. Precise cortical surface extraction of the fetal brain is needed for quantitative studies of early human brain development. Our work combines the well-known statistical classification framework with active contour segmentation for central gray matter extraction. A main advantage of the presented procedure for fetal brain surface extraction is that we do not include any spatial prior coming from anatomical atlases. The results presented here are preliminary but promising. Our efforts are now in testing this approach on a wider range of gestational ages, which we will include in the final version of this work, and in studying its generalization to different scanners and different types of MRI sequences.

References. [1] Guibaud, Prenatal Diagnosis 29(4) (2009). [2] Rousseau, Acad. Rad. 13(9) (2006). [3] Jiang, IEEE TMI (2007). [4] Warfield, IADB, MICCAI (2009). [5] Claude, IEEE Trans. Bio. Eng. 51(4) (2004). [6] Habas, MICCAI (Pt. 1) (2008). [7] Bertelsen, ISMRM (2009). [8] Bach Cuadra, IADB, MICCAI (2009). [9] Styner, IEEE TMI 19(3) (2000). [10] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3) (1997). [11] Chan, IEEE Trans. Img. Proc. 10(2) (2001). [12] FreeSurfer, http://surfer.nmr.mgh.harvard.edu.
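A schematic sketch of the modified Chan-Vese evolution described above, with the inside mean pinned to a user-supplied basal ganglia intensity rather than re-estimated each step. The toy image and parameters are assumptions, the ellipsoid prior is omitted, and this is not the paper's implementation:

import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(60, 5, size=(64, 64))
img[20:40, 20:44] = rng.normal(150, 5, size=(20, 24))   # bright target region

c_in = 150.0                            # fixed mean intensity of the target (user-set)
y, x = np.mgrid[:64, :64]
phi = 12.0 - np.hypot(y - 30, x - 32)   # initial circle, phi > 0 inside

dt, mu = 0.2, 0.1
for _ in range(300):
    out = phi <= 0
    c_out = img[out].mean() if out.any() else img.mean()   # outside mean, re-estimated
    # Curvature term approximated by a Laplacian (a common simplification):
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    force = -(img - c_in) ** 2 + (img - c_out) ** 2
    phi += dt * (mu * lap + force / force.std())           # normalized for stability

print("segmented pixels:", int((phi > 0).sum()), "(true region: 480)")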
Abstract:
PURPOSE: To determine whether, compared to pressure support (PS), neurally adjusted ventilatory assist (NAVA) reduces patient-ventilator asynchrony in intensive care patients undergoing noninvasive ventilation with an oronasal face mask. METHODS: In this prospective interventional study we compared patient-ventilator synchrony between PS (with ventilator settings determined by the clinician) and NAVA (with the level set so as to obtain the same maximal airway pressure as in PS). Two 20-min recordings of airway pressure, flow, and electrical activity of the diaphragm during PS and NAVA were acquired in randomized order. Trigger delay (T(d)), the patient's neural inspiratory time (T(in)), ventilator pressurization duration (T(iv)), inspiratory time in excess (T(iex)), the number of asynchrony events per minute, and the asynchrony index (AI) were determined. RESULTS: The study included 13 patients: six with COPD and two with mixed pulmonary disease. T(d) was reduced with NAVA: median 35 ms (IQR 31-53 ms) versus 181 ms (122-208 ms); p = 0.0002. NAVA reduced both premature and delayed cycling in the majority of patients, but not the median T(iex) value. The total number of asynchrony events tended to be reduced with NAVA: 1.0 events/min (0.5-3.1 events/min) versus 4.4 events/min (0.9-12.1 events/min); p = 0.08. AI was lower with NAVA: 4.9% (2.5-10.5%) versus 15.8% (5.5-49.6%); p = 0.03. During NAVA there were no ineffective efforts, or late or premature cyclings. PaO(2) and PaCO(2) did not differ between ventilatory modes. CONCLUSION: Compared to PS, NAVA improved patient-ventilator synchrony during noninvasive ventilation by reducing T(d) and AI. Moreover, with NAVA, ineffective efforts and late and premature cyclings were absent.
Abstract:
We propose a segmentation method based on the geometric representation of images as 2-D manifolds embedded in a higher dimensional space. The segmentation is formulated as a minimization problem, where the contours are described by a level set function and the objective functional corresponds to the surface of the image manifold. In this geometric framework, both data-fidelity and regularity terms of the segmentation are represented by a single functional that intrinsically aligns the gradients of the level set function with the gradients of the image and results in a segmentation criterion that exploits the directional information of image gradients to overcome image inhomogeneities and fragmented contours. The proposed formulation combines this robust alignment of gradients with attractive properties of previous methods developed in the same geometric framework: 1) the natural coupling of image channels proposed for anisotropic diffusion and 2) the ability of subjective surfaces to detect weak edges and close fragmented boundaries. The potential of such a geometric approach lies in the general definition of Riemannian manifolds, which naturally generalizes existing segmentation methods (the geodesic active contours, the active contours without edges, and the robust edge integrator) to higher dimensional spaces, non-flat images, and feature spaces. Our experiments show that the proposed technique improves the segmentation of multi-channel images, images subject to inhomogeneities, and images characterized by geometric structures like ridges or valleys.
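A small numeric illustration (not the authors' code) of the quantity such a functional measures: a gray-level image embedded as the surface (x, y, beta*I(x, y)) has induced metric g = Id + beta^2 * grad(I) grad(I)^T, and the "surface of the image manifold" integrates sqrt(det g). The toy image and beta are assumptions:

import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish toy image
img = (img - img.min()) / (img.max() - img.min())

beta = 5.0                       # aspect-ratio parameter of the embedding (assumed)
Iy, Ix = np.gradient(img)

# For a single-channel image, det g = 1 + beta^2 * (Ix^2 + Iy^2):
det_g = 1.0 + beta**2 * (Ix**2 + Iy**2)
area = np.sqrt(det_g).sum()      # discrete surface area of the image manifold
print(f"image-manifold area: {area:.1f} (a flat image would give {img.size})")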
Abstract:
The projection method and Sasaki's variational approach are two techniques for obtaining a divergence-free vector field from an arbitrary initial field. Starting from a high-altitude wind velocity, a velocity field on a staggered grid is generated above a topography described by an analytical function. The Cartesian approach known as the Embedded Boundary Method is used to solve the Poisson equation arising from the projection on an irregular domain with mixed boundary conditions. The resulting solution corrects the initial field so as to obtain a field that satisfies mass conservation while also accounting for the effects of the terrain geometry. The velocity field thus generated will be used to propagate a forest fire over the topography using the level set method. The algorithm is described for the two- and three-dimensional cases and convergence tests are carried out.
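A minimal sketch of the projection step on a periodic square with an FFT-based Poisson solve; the actual method solves the Poisson equation with the Embedded Boundary Method and mixed boundary conditions over terrain, which this toy deliberately omits. Grid size and the random initial wind are assumptions:

import numpy as np

n, L = 64, 1.0
h = L / n
rng = np.random.default_rng(5)
u = rng.normal(size=(n, n))   # u[j,i]: x-velocity on the right face of cell (j,i)
v = rng.normal(size=(n, n))   # v[j,i]: y-velocity on the top face of cell (j,i)

def divergence(u, v):
    return (u - np.roll(u, 1, 1)) / h + (v - np.roll(v, 1, 0)) / h

# Solve the 5-point Poisson problem lap(psi) = div(u) spectrally (periodic):
k = 2 * np.pi * np.fft.fftfreq(n, d=h)
KX, KY = np.meshgrid(k, k)
symbol = (2 * np.cos(KX * h) + 2 * np.cos(KY * h) - 4) / h**2
symbol[0, 0] = 1.0            # gauge: the mean of psi is arbitrary
psi = np.fft.ifft2(np.fft.fft2(divergence(u, v)) / symbol).real

# Subtract grad(psi), evaluated on the same faces, to project out divergence:
u -= (np.roll(psi, -1, 1) - psi) / h
v -= (np.roll(psi, -1, 0) - psi) / h
print("max |div| after projection:", np.abs(divergence(u, v)).max())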