948 results for Mathematical morphology analysis
Abstract:
Aggregates provide physical microenvironments for microorganisms, the vital actors of soil systems, and thus play a major role as both an arena for and a product of soil carbon stabilization and dynamics. The surface of an aggregate is what enables the exchange of materials and the fluxes of air and water between the aggregate's exterior and interior regions. We made use of 3D X-ray CT images of aggregates and mathematical morphology to provide an exhaustive quantitative description of soil aggregate morphology that includes both the intra-aggregate pore space structure and the aggregate surface features. First, the evolution of Minkowski functionals (i.e. volume, boundary surface, curvature and connectivity) under successive dilations of the solid part of the aggregates was investigated to quantify their 3D geometrical features. Second, the inner pore space was considered as the object of interest. We devised procedures (a) to define the ends of the accessible pores that are connected to the aggregate surface and (b) to separate accessible from inaccessible porosity. Geometrical Minkowski functionals of the intra-aggregate pore space provide an exhaustive characterization of the inner structure of the aggregates. Aggregates collected from two different soil treatments were analyzed to explore the utility of these morphological tools in capturing the impact of two different soil managements, i.e. conventional tillage and native succession vegetation, on aggregate morphology. The quantitative tools of mathematical morphology distinguished differences in patterns of aggregate structure associated with the different soil managements.
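As a hedged illustration of the dilation-based analysis described above (not the authors' code; the function name `minkowski_curve`, the input array `solid` and the crude voxel-face surface proxy are all our own assumptions), a minimal Python sketch might look like this:

```python
# Minimal sketch: track Minkowski-type functionals (volume, a boundary-surface
# proxy, and the Euler number as a connectivity measure) over successive
# dilations of the solid phase of a segmented 3D CT volume.
import numpy as np
from scipy import ndimage
from skimage import measure

def minkowski_curve(solid, n_dilations=5):
    """solid: 3D boolean array (True = solid voxel). Returns per-step functionals."""
    results = []
    current = solid.copy()
    for step in range(n_dilations + 1):
        volume = int(current.sum())                    # voxel count ~ volume
        eroded = ndimage.binary_erosion(current)
        surface = int(current.sum() - eroded.sum())    # exposed-voxel surface proxy
        euler = measure.euler_number(current, connectivity=3)
        results.append((step, volume, surface, euler))
        current = ndimage.binary_dilation(current)     # next dilation step
    return results
```

The mean-curvature functional is omitted here for brevity; dedicated packages (e.g. quantimpy) compute all four 3D Minkowski functionals directly.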
Abstract:
The Iterative Closest Point algorithm (ICP) is commonly used in engineering applications to solve the rigid registration problem of partially overlapped point sets which are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. In spite of decreasing its complexity, some of the variants tend to have a negative impact on the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is the improvement of the algorithm's computational cost so that a wider range of computationally demanding problems from among the ones described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account those distances with a lower computational cost than the Euclidean one, which is used as the de facto standard in the algorithm's implementations in the literature. In that analysis, the functioning of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method in order to determine the one offering the best results. Given that the distance calculation represents a significant part of the whole set of computations performed by the algorithm, it is expected that any reduction of that operation will significantly and positively affect the overall performance of the method. As a result, a performance improvement has been achieved by the application of those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
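As a hedged sketch of the idea (not the thesis implementation; the name `icp` and the parameter choices are ours), the matching metric in a basic point-to-point ICP can be swapped simply by changing the Minkowski order `p` of the nearest-neighbor query, e.g. `p=1` (Manhattan) or `p=float("inf")` (Chebyshev) instead of the Euclidean `p=2`:

```python
# Minimal ICP sketch with a pluggable nearest-neighbour metric.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, p=2, iters=30):
    """source, target: (N, 3) arrays; p: Minkowski order of the matching metric."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src, p=p)            # closest-neighbour search
        matched = target[idx]
        # best rigid transform in the Euclidean least-squares sense (Kabsch/SVD)
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                      # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Note that only the correspondence search uses the cheaper metric; the per-iteration alignment step remains the closed-form Euclidean least-squares solution.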
Abstract:
Using generalized collocation techniques based on fitting functions that are trigonometric (rather than algebraic, as in classical integrators), we develop a new class of multistage, one-step, variable stepsize, variable coefficients implicit Runge-Kutta methods to solve oscillatory ODE problems. The coefficients of the methods are functions of the frequency and the stepsize. We refer to this class as trigonometric implicit Runge-Kutta (TIRK) methods. They integrate an equation exactly if its solution is a trigonometric polynomial with a known frequency. We characterize the order and A-stability of the methods and establish results similar to those of classical algebraic collocation RK methods.
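To make the collocation idea concrete, here is a hedged sketch in our own notation (not taken from the paper):

```latex
% Sketch in our own notation: a classical collocation RK method fits a
% polynomial u in span{1, t, t^2, ...}; a TIRK method instead fits a
% trigonometric polynomial with the known frequency \omega,
%   u(t) \in \operatorname{span}\{1, \cos\omega t, \sin\omega t, \cos 2\omega t, \dots\},
% and imposes the collocation conditions at the stage points:
\[
  u(t_n) = y_n, \qquad
  u'(t_n + c_i h) = f\bigl(t_n + c_i h,\; u(t_n + c_i h)\bigr),
  \qquad i = 1, \dots, s .
\]
% Solving these conditions yields Butcher coefficients a_{ij}(\nu), b_i(\nu)
% depending on \nu = \omega h, which is why the method is exact whenever the
% true solution lies in the trigonometric space above.
```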
Resumo:
Magnesium borate hydroxide (MBH) nanowhiskers were synthesized using a one step hydrothermal process with different surfactants. The effect surfactants have on the structure and morphology of the MBH nanowhiskers has been investigated. The X-ray diffraction profile confirms that the as-synthesized material is of single phase, monoclinic MgBO2(OH). The variations in the size and shape of the different MBH nanowhiskers have been discussed based on the surface morphology analysis. The annealing of MBH nanowhiskers at 500 °C for 4 h has significant effect on the crystal structure and surface morphology. The UV–vis absorption spectra of the MBH nanowhiskers synthesized with and without surfactants show enhanced absorption in the low-wavelength region, and their optical band gaps were estimated from the optical band edge plots. The photoluminescence spectra of the MBH nanowhiskers produced with and without surfactants show broad emission band with the peak maximum at around 400 nm, which confirms the dominant contribution from the surface defect states.
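The abstract does not specify the extrapolation procedure used for the band-edge plots; a common choice, assumed here purely for illustration (including the direct-allowed exponent n = 2, the use of absorbance as a proxy for the absorption coefficient, and the name `tauc_band_gap`), is a Tauc-style linear fit:

```python
# Hedged sketch: estimate an optical band gap by extrapolating the linear
# region of a Tauc plot, (alpha * h * nu)^2 vs. h * nu, to zero.
import numpy as np

def tauc_band_gap(wavelength_nm, absorbance, fit_window):
    """fit_window: (low_eV, high_eV) energy range of the linear band edge."""
    h_nu = 1239.84 / np.asarray(wavelength_nm)       # photon energy in eV
    tauc = (np.asarray(absorbance) * h_nu) ** 2      # direct-gap exponent n = 2
    lo, hi = fit_window
    mask = (h_nu >= lo) & (h_nu <= hi)
    slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
    return -intercept / slope                        # x-intercept = Eg in eV
```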
Abstract:
Mathematical morphology provides a systematic approach to extracting geometric features from binary images, using morphological operators that transform the original image into another by means of a third image called the structuring element; it originated around 1960 with the researchers Jean Serra and Georges Matheron. Fuzzy mathematical morphology extends these operators to grayscale and color images and was initially proposed by Goetcherian using fuzzy logic. With this approach it is possible to study fuzzy connectives, which gives some scope of analysis for the construction of morphological operators and their applicability in image processing. In this paper, we propose the development of fuzzy morphological operators using R-implications to aid and improve image processing, and then build a system with these operators to count mycorrhizal fungal spores and red blood cells. The hypothetical-deductive methodology was used for the formal part and an incremental-iterative methodology for the experimental part. These operators were applied to digital and microscopic images. The fuzzy conjunctions and implications of mathematical morphology are used to choose the best adjunction to apply to the problem at hand; i.e., we apply automorphisms to the implications and observe their influence on image segmentation and subsequent processing. To validate the developed system, it was applied to counting problems in microscopic images, extending to pathological images. It was noted that for the counting of spores the best operator was the Gödel erosion. Three groups of fuzzy morphological operators were developed, based on the Łukasiewicz, Gödel and Goguen implications, which can have a variety of applications.
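As a hedged illustration of one such adjunction (our own minimal construction, not the paper's system; the name `fuzzy_erode_goedel` and its conventions are assumptions), the Gödel R-implication I(a, b) = 1 if a <= b, else b, yields a fuzzy erosion of a [0, 1]-valued image:

```python
# Minimal sketch: fuzzy erosion with the Goedel adjunction, i.e. the t-norm
# T(a, b) = min(a, b) and its residual implication I(a, b) = 1 if a <= b else b.
import numpy as np

def fuzzy_erode_goedel(img, se):
    """img: 2D array in [0, 1]; se: odd-sized 2D array in [0, 1] (memberships)."""
    kh, kw = se.shape
    ph, pw = kh // 2, kw // 2
    # pad with 1.0 so the border is neutral: I(a, 1) = 1 for any a
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=1.0)
    out = np.ones_like(img)
    for i in range(kh):
        for j in range(kw):
            shifted = padded[i:i + img.shape[0], j:j + img.shape[1]]
            impl = np.where(se[i, j] <= shifted, 1.0, shifted)  # Goedel implication
            out = np.minimum(out, impl)   # erosion = infimum over the SE support
    return out
```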
Abstract:
The aim of this investigation is to analyze the use of the blog as an educational resource for the development of mathematical communication in secondary education. With this aim, four aspects are analyzed: organization of mathematical thinking through communication; communication of mathematical thinking; analysis and evaluation of the strategies and mathematical thought of others; and expression of mathematical ideas using mathematical language. The research was conducted from a qualitative approach at an exploratory level, using the case study method in 4 classrooms of the second grade of secondary education at a private school in Lima. Observation of 20 posts on the math class blog was carried out; a focus group was held with a sample of 9 students of different levels of academic performance; and the school's academic coordinator was interviewed. The results show that the organization of mathematical thinking through communication is carried out in the blog in written, graphical and oral form through explanations, diagrams and videos. Regarding communication of mathematical thinking, the blog is used to describe concepts, arguments and mathematical procedures with the students' own words and examples. The analysis and evaluation of strategies and mathematical thinking is performed through comments and debates about the posts. It was also noted that the blog does not facilitate the use of mathematical language to express mathematical ideas, since it allows neither the direct writing of symbols nor graphic representation.
Abstract:
Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not yet been fully investigated. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment ("relaxation" vs. "stress") are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Then three features (PDmean, PDmax and PDWalsh) are extracted from the preprocessed PD signal for affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and on subject self-evaluation, respectively. In addition, five different classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is first applied to remove eye blinks from the PD signal, and then a moving average window is used to obtain a representative value PDr for every one-second interval of PD. The on-line affective assessment algorithm has three main steps: preparation, feature-based decision voting and affective determination. The final results show accuracies of 72.30% and 73.55% for the data subsets chosen with the two data selection methods (paired t-test and subject self-evaluation, respectively). To further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing is only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals to include in future automated real-time affective recognition systems, especially for detecting "relaxation" vs. "stress" states.
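As a hedged sketch of the two on-line preprocessing steps just described (the function name `online_pdr`, the interpolation across blink gaps, and all parameter values are our assumptions, not the study's):

```python
# Minimal sketch: hard-threshold blink removal followed by a one-second
# moving-average window yielding the representative value PDr per second.
import numpy as np

def online_pdr(pd_signal, fs, blink_threshold):
    """pd_signal: raw pupil-diameter samples; fs: sampling rate in Hz."""
    pd = np.asarray(pd_signal, dtype=float)
    valid = pd > blink_threshold            # blinks collapse PD toward zero
    # bridge the removed blink samples by linear interpolation
    pd_clean = np.interp(np.arange(pd.size), np.flatnonzero(valid), pd[valid])
    window = int(fs)                        # one-second moving average
    kernel = np.ones(window) / window
    smoothed = np.convolve(pd_clean, kernel, mode="same")
    return smoothed[::window]               # one PDr value per second
```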
Abstract:
In this paper, we consider Meneghetti & Bicudo's proposal (2003) regarding the constitution of mathematical knowledge and analyze it with respect to two focuses: in relation to conceptions of mathematical knowledge following the foundational crisis in mathematics; and in the educational context of mathematics. The first focus is investigated by analyzing recent claims in the philosophy of mathematics. The second is investigated first via a theoretical reflection and then via an examination of the implementation of the proposal in the development of didactic materials for teaching and learning mathematics. Finally, we present the main results of the application of one of those materials.
Abstract:
Intravascular ultrasound (IVUS) image segmentation can provide more detailed vessel and plaque information, resulting in better diagnostics, evaluation and therapy planning. A novel automatic segmentation proposal is described herein; the method relies on binary morphological object reconstruction to segment the coronary wall in IVUS images. First, preprocessing and feature extraction steps are performed, allowing the desired information to be extracted. Afterward, binary versions of the desired objects are reconstructed, and their contours are extracted to segment the image. The effectiveness is demonstrated by segmenting 1300 images, in which the outcomes had a strong correlation with their corresponding gold standard. Moreover, the results were corroborated statistically, with true positive area fractions as high as 92.72% and 91.9% for the lumen and media-adventitia borders, respectively. In addition, this approach can be easily adapted and applied to other related modalities, such as intravascular optical coherence tomography and intravascular magnetic resonance imaging.
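As a hedged sketch of the core step (not the authors' pipeline; `segment_border`, `mask` and `marker` are our own names, and we assume a preprocessed slice already reduced to a binary feature image with a seed inside the object of interest):

```python
# Minimal sketch: reconstruct a binary object from a marker by morphological
# reconstruction (dilation), then trace the contour of the rebuilt object.
import numpy as np
from skimage.morphology import reconstruction
from skimage.measure import find_contours

def segment_border(mask, marker):
    """mask, marker: 2D boolean arrays with marker <= mask elementwise."""
    # reconstruction by dilation keeps only the component(s) of `mask`
    # that the marker touches, discarding disconnected clutter
    rebuilt = reconstruction(marker.astype(float), mask.astype(float),
                             method="dilation")
    contours = find_contours(rebuilt, 0.5)   # extract the object boundary
    return rebuilt > 0.5, contours
```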
Abstract:
Objectives: We are interested in the numerical simulation of the anastomotic region between the outflow cannula of an LVAD and the aorta. Segmentation, geometry reconstruction and grid generation from patient-specific data remain an issue because of the variable quality of DICOM images, in particular CT scans (e.g. metallic noise of the device, non-aortic contrast phase). We propose a general framework to overcome this problem and create grids suitable for numerical simulations.

Methods: Preliminary treatment of the images is performed by reducing the level window and enhancing the contrast of the grayscale image using contrast-limited adaptive histogram equalization. A gradient anisotropic diffusion filter is applied to reduce the noise. Then, watershed segmentation algorithms and mathematical morphology filters allow reconstructing the patient geometry. This is done using the InsightToolKit library (www.itk.org). Finally, the Vascular Modeling ToolKit (www.vmtk.org) and gmsh (www.geuz.org/gmsh) are used to create the meshes for the fluid (blood) and the structure (arterial wall, outflow cannula) and to identify the boundary layers a priori. The method is tested on five patients with left ventricular assistance who underwent a CT scan exam.

Results: The method produced good results in four patients: the anastomosis area is recovered and the generated grids are suitable for numerical simulations. In one patient the method failed to produce a good segmentation because of the small dimension of the aortic arch with respect to the image resolution.

Conclusions: The described framework allows the use of data that could not otherwise be segmented by standard automatic segmentation tools. In particular, the generated computational grids are suitable for simulations that take fluid-structure interactions into account. Finally, the presented method features good reproducibility and fast application.
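As a hedged restatement of the preprocessing chain in the Methods (the abstract uses ITK directly; this sketch goes through SimpleITK's procedural interface instead, and every parameter value, as well as the choice of which watershed label corresponds to the aorta, is an illustrative assumption):

```python
# Minimal sketch: CLAHE-style contrast enhancement, gradient anisotropic
# diffusion denoising, then a watershed segmentation with morphological cleanup.
import SimpleITK as sitk

def preprocess_and_segment(dicom_image):
    img = sitk.Cast(dicom_image, sitk.sitkFloat32)
    # contrast-limited adaptive histogram equalization
    img = sitk.AdaptiveHistogramEqualization(img, alpha=0.3, beta=0.3)
    # edge-preserving noise reduction
    img = sitk.GradientAnisotropicDiffusion(img, timeStep=0.0625,
                                            conductanceParameter=2.0,
                                            numberOfIterations=5)
    # watershed on the gradient magnitude
    grad = sitk.GradientMagnitude(img)
    labels = sitk.MorphologicalWatershed(grad, level=1.0, markWatershedLine=False)
    # label 1 stands in for the (manually identified) aortic region
    aorta = sitk.BinaryMorphologicalClosing(labels == 1, [2, 2, 2])
    return aorta
```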
Abstract:
This work deals with measuring the volume of wood logs by means of color machine vision. The color images were obtained from the groundwood mill of a forest industry company located in Simpele. The work presents in depth the mathematical theory related to the image processing methods used, such as classification, noise removal and log segmentation. The presented methods were implemented in practice, and the results obtained with the different methods were compared with one another. The image processing algorithms were implemented with Matlab 6.0, mainly using the newest Image Processing Toolbox, version 3.0. The perspective of this work is mainly practical and applied, because the forest industry in Finland operates at a high level and includes many companies in which the method developed in this work can be utilized.
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
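As a hedged sketch of the two-pass idea (our reading of the DTOCS, not the thesis code: the local step between 8-neighbours is taken to cost the gray-level difference plus one, and the forward/backward raster sweeps are repeated until the map stabilizes; `dtocs` and its conventions are assumptions):

```python
# Minimal sketch of a chamfer-style two-pass DTOCS on a gray-level image.
import numpy as np

def dtocs(gray, region):
    """gray: 2D integer array; region: boolean mask, True where distance is computed."""
    dist = np.where(region, np.inf, 0.0)          # outside the region acts as source
    h, w = gray.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # forward-pass neighbours
    changed = True
    while changed:                                # typically only a few rounds
        changed = False
        for sweep in (1, -1):                     # forward, then backward raster scan
            rows = range(h) if sweep == 1 else range(h - 1, -1, -1)
            for y in rows:
                cols = range(w) if sweep == 1 else range(w - 1, -1, -1)
                for x in cols:
                    if not region[y, x]:
                        continue
                    for dy, dx in offsets:
                        ny, nx = y + dy * sweep, x + dx * sweep
                        if 0 <= ny < h and 0 <= nx < w:
                            step = abs(int(gray[y, x]) - int(gray[ny, nx])) + 1
                            if dist[ny, nx] + step < dist[y, x]:
                                dist[y, x] = dist[ny, nx] + step
                                changed = True
    return dist
```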
Abstract:
The removal of organics from copper electrolyte solutions after solvent extraction by dual media filtration is one of the most efficient ways to ensure a clean electrolyte flow into electrowinning. The clean electrolyte ensures the production of good-quality cathode plates. Dual media filtration uses two layers of filter media, anthracite and garnet. The anthracite layer helps the entrained organic droplets coalesce; these then float to the top of the filter and return to the solvent extraction process. The garnet layer catches any solids left in the electrolyte traveling through the filter media. This thesis concentrates on the characterization of five different anthracites, aiming to find differences between them using specific surface area analysis, particle size analysis and morphology analysis. These results are compared to the pressure loss values obtained from lab column tests and to the bed expansion behavior. The goal of the thesis was to find out whether there were any differences in the anthracites that would make one perform better than another. No large differences were found in any aspect of the particle characterization, but the differences that were found should be studied further in order to confirm the significance of porosity, surface area, intensity mean and intensity SD (standard deviation) for anthracites and their use in dual media filtration. The thesis analyzed anthracite samples in a way not found in any public literature source, and further studies on the issue would bring more knowledge to the electrolyte process.
Abstract:
The present work deals with a study of morphological operators with applications. Morphology is now a necessary tool for engineers involved with imaging applications. Morphological operations have been viewed as filters whose properties have been well studied (Heijmans, 1994). Another well-known class of non-linear filters is the class of rank order filters (Pitas and Venetsanopoulos, 1990). Soft morphological filters are a combination of morphological and weighted rank order filters (Koskinen et al., 1991; Kuosmanen and Astola, 1995). They were introduced to improve the behaviour of traditional morphological filters in noisy environments. The idea was to slightly relax the typical morphological definitions in such a way that a degree of robustness is achieved while most of the desirable properties of typical morphological operations are maintained. Soft morphological filters are less sensitive to additive noise and to small variations in object shape than typical morphological filters. They can remove positive and negative impulse noise while preserving small details in images. Currently, mathematical morphology allows processing images to enhance fuzzy areas, segment objects, detect edges and analyze structures. The techniques developed for binary images are a major step forward in the application of this theory to gray-level images. One of these techniques is based on fuzzy logic and on the theory of fuzzy sets. Fuzzy sets have proved to be strongly advantageous when representing inaccuracies, regarding not only the spatial localization of objects in an image but also the membership of a certain pixel in a given class. Such inaccuracies are inherent to real images, either because of indefinite boundaries between the structures or objects to be segmented, because of noisy acquisitions, or because they are inherent to the image formation methods.
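As a hedged sketch of the relaxation just described (our own construction following the cited soft-morphology definitions, not this work's code; `soft_dilate` and its offset-list conventions are assumptions): the hard core of the structuring element is weighted by a repetition parameter k, and the k-th largest value of the resulting multiset is taken.

```python
# Minimal sketch: soft morphological dilation as a weighted rank-order filter.
import numpy as np

def soft_dilate(img, core, boundary, k):
    """core, boundary: lists of (dy, dx) offsets; core is assumed to contain (0, 0)."""
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            values = []
            for dy, dx in core:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    values += [img[y + dy, x + dx]] * k   # core values repeated k times
            for dy, dx in boundary:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    values.append(img[y + dy, x + dx])    # soft boundary counted once
            values.sort()
            out[y, x] = values[-k]                        # k-th largest of the multiset
    return out
```

With k = 1 this reduces to the ordinary flat dilation over core plus boundary, which is one way to see how the soft variant interpolates between morphological and rank-order filtering.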
Abstract:
The focus of this article is to develop computationally efficient mathematical morphology operators on hypergraphs. To this aim we consider lattice structures on hypergraphs on which we build morphological operators. We develop a pair of dual adjunctions between the vertex set and the hyperedge set of a hypergraph H by defining a vertex-hyperedge correspondence. This allows us to recover the classical notion of a dilation/erosion of a subset of vertices and to extend it to subhypergraphs of H. Afterward, we propose several new openings, closings, granulometries and alternate sequential filters acting (i) on the subsets of the vertex and hyperedge sets of H and (ii) on the subhypergraphs of H.
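As a hedged illustration of such a vertex-hyperedge correspondence (our own minimal encoding, not the article's construction; the incidence representation and function names are assumptions), one adjoint pair sends a vertex set to the hyperedges meeting it, and a hyperedge set to the vertices all of whose incident edges lie in it:

```python
# Minimal sketch: a dilation/erosion adjunction between vertex subsets and
# hyperedge subsets of a hypergraph, satisfying dilate(X) <= Y iff X <= erode(Y).
from typing import Dict, Set

def dilate_vertices(X: Set[str], incidence: Dict[str, Set[str]]) -> Set[str]:
    """incidence: hyperedge -> set of its vertices. Returns hyperedges meeting X."""
    return {e for e, verts in incidence.items() if verts & X}

def erode_edges(Y: Set[str], incidence: Dict[str, Set[str]]) -> Set[str]:
    """Vertices all of whose incident hyperedges lie in Y (the adjoint erosion)."""
    vertices = set().union(*incidence.values())
    return {v for v in vertices
            if all(v not in verts or e in Y for e, verts in incidence.items())}
```

Composing the two maps in either order then yields the openings and closings from which granulometries and sequential filters can be built.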