902 results for Density-based Scanning Algorithm
Abstract:
Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A previous analysis of the seed pixels permits the homogeneity criterion to be adjusted to each region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results have proven the reliability of the proposed approach.
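As a minimal illustration of the region-growing step described above (a sketch only: it uses a fixed intensity tolerance rather than the paper's clustering and convex-hull homogeneity criterion, and the `grow_region` helper is hypothetical):

```python
# Minimal seeded region growing sketch: a seed absorbs 4-connected neighbours
# whose intensity stays within a fixed tolerance of the running region mean.
from collections import deque
import numpy as np

def grow_region(image, seed, tol=10.0):
    h, w = image.shape
    visited = np.zeros((h, w), dtype=bool)
    region = [seed]
    visited[seed] = True
    total = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        mean = total / len(region)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                if abs(float(image[ny, nx]) - mean) <= tol:
                    visited[ny, nx] = True
                    region.append((ny, nx))
                    total += float(image[ny, nx])
                    queue.append((ny, nx))
    return region  # list of (row, col) pixels belonging to the grown region
```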
Abstract:
Coating and filler pigments have a strong influence on the properties of paper. Filler content can exceed 30%, and the pigment content of a coating is about 85-95 weight percent. The physical and chemical properties of the pigments differ, and knowledge of these properties is important for optimising the optical and printing properties of the paper. The size and shape of pigment particles can be measured by different analysers based on sedimentation, laser diffraction, changes in an electric field, etc. In this master's thesis, particle properties were studied particularly with a scanning electron microscope (SEM) and image analysis programs. The research included nine pigments with different particle sizes and shapes. The pigments were analysed with two image analysis programs (INCA Feature and Poikki), a Coulter LS230 (laser diffraction) and a SediGraph 5100 (sedimentation). The results were compared to assess the effect of particle shape on the performance of the analysers. Only the image analysis programs gave parameters describing particle shape. One part of the research was also the sample preparation for SEM; in an ideal sample, individual particles should be separated and distinct. The analysis methods gave different results, but the results from the image analysis programs corresponded to either the sedimentation or the laser diffraction results, depending on particle shape. Detailed analysis of particle shape required high magnification in the SEM, but the measured parameters described the shape of the particles very well. Large particles (ecd ~ 1 µm) could also be used in 3D modelling, which enabled measurement of particle thickness. The scanning electron microscope and image analysis programs proved to be effective and versatile tools for particle analysis. Further development and experience will determine the usability of the method in routine use.
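For reference, size and shape parameters of the kind reported by image analysis programs can be derived from a particle's projected area and perimeter; the sketch below shows two common ones (equivalent circular diameter and circularity), without implying that INCA Feature or Poikki compute exactly these:

```python
import math

def equivalent_circular_diameter(area):
    """Diameter of the circle with the same projected area as the particle."""
    return 2.0 * math.sqrt(area / math.pi)

def circularity(area, perimeter):
    """4*pi*A / P**2: 1.0 for a circle, smaller for elongated or irregular shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

# Example: a particle with a projected area of 0.8 um^2 and a perimeter of 3.5 um
print(equivalent_circular_diameter(0.8))  # ~1.0 um, i.e. ecd ~ 1 um
print(circularity(0.8, 3.5))              # ~0.82
```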
Abstract:
In this paper the authors propose a new closed contour descriptor that can be seen as a feature extractor for closed contours based on the Discrete Hartley Transform (DHT); its main characteristic is that it uses only half of the coefficients required by Elliptical Fourier Descriptors (EFD) to obtain a contour approximation with a similar error measure. The proposed closed contour descriptor provides an excellent capability for information compression, useful for a great number of AI applications. Moreover, it can provide scale, position and rotation invariance, and, last but not least, it has the advantage that both the parameterization and the shape reconstructed from the compressed set can be computed very efficiently by the fast Discrete Hartley Transform (DHT) algorithm. This feature extractor can be useful when the application calls for reversible features and when the user needs an easy measure of quality for a given level of compression, scalable from low to very high quality.
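A rough sketch of the underlying idea, assuming the DHT is applied separately to the x(t) and y(t) coordinate signals of the closed contour and that only the lowest-frequency coefficients are kept (the paper's exact parameterization and invariance normalizations are not reproduced):

```python
# Illustrative DHT-based contour approximation (a sketch, not the paper's descriptor).
import numpy as np

def dht(x):
    """Discrete Hartley Transform via the FFT: H(k) = Re(X(k)) - Im(X(k))."""
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(H):
    """The DHT is its own inverse up to a factor of 1/N."""
    return dht(H) / len(H)

def lowpass_dht(signal, n_keep=16):
    """Zero all but the n_keep lowest-|frequency| Hartley coefficients (both spectrum ends)."""
    H = dht(np.asarray(signal, dtype=float))
    k = n_keep // 2
    H[k:len(H) - k] = 0.0
    return H

# xs, ys: sampled coordinates of a closed contour; the truncated spectra give an
# approximate reconstruction:  xs_hat = idht(lowpass_dht(xs)); ys_hat = idht(lowpass_dht(ys))
```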
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
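To make the regularized least-squares connection concrete, a minimal linear sketch of pairwise preference learning is shown below; it is not the thesis' kernelized algorithm, and the `fit_preferences` helper is hypothetical:

```python
# Each observed preference "item i is preferred to item j" becomes a difference
# vector x_i - x_j with target +1, and a ridge-regression problem is solved in closed form.
import numpy as np

def fit_preferences(X, pairs, lam=1.0):
    """X: (n, d) feature matrix; pairs: list of (i, j) meaning i is preferred to j."""
    D = np.array([X[i] - X[j] for i, j in pairs])            # pairwise difference features
    t = np.ones(len(pairs))                                   # target margin for each pair
    d = X.shape[1]
    w = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ t)   # regularized least-squares solution
    return w                                                  # score(x) = w @ x ranks items

# Usage: a higher score w @ x means "more preferred"; sorting by the score gives a ranking.
```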
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. Both new distance transforms are extensions of the well known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modifications, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way; it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms: three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using Delaunay triangulation. The obtained image quality equals that of DCT images with a 4 x 4
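A two-pass sketch in the spirit of the DTOCS can illustrate the idea, assuming a local step cost of the gray-level difference plus one between neighbouring pixels (the thesis' exact kernels and iteration scheme are not reproduced):

```python
# Two-pass chamfer-style gray-level distance transform sketch (DTOCS-like).
import numpy as np

def dtocs_like(gray, seeds, n_iter=3):
    """gray: 2-D array of gray values; seeds: boolean mask where the distance is 0."""
    h, w = gray.shape
    d = np.where(seeds, 0.0, np.inf)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours already visited in a raster scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]        # neighbours already visited in a reverse scan
    for _ in range(n_iter):                        # complicated images may need a few rounds
        for rows, cols, nbrs in ((range(h), range(w), fwd),
                                 (range(h - 1, -1, -1), range(w - 1, -1, -1), bwd)):
            for y in rows:
                for x in cols:
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                            if d[ny, nx] + step < d[y, x]:
                                d[y, x] = d[ny, nx] + step
    return d
```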
Abstract:
Modeling ecological niches of species is a promising approach for predicting the geographic potential of invasive species in new environments. Argentine ants (Linepithema humile) rank among the most successful invasive species: native to South America, they have invaded broad areas worldwide. Despite their widespread success, little is known about what makes an area susceptible - or not - to invasion. Here, we use a genetic algorithm approach to ecological niche modeling based on high-resolution remote-sensing data to examine the roles of niche similarity and difference in predicting invasions by this species. Our comparisons support a picture of general conservatism of the species' ecological characteristics, in spite of distinct geographic and community contexts.
Abstract:
This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach have been studied in detail: the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model; hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built into ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted by the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
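For context, the Gurson-Tvergaard model referred to above is commonly written with the yield function below, where \(\sigma_{eq}\) is the macroscopic von Mises stress, \(\sigma_m\) the hydrostatic stress, \(\sigma_y\) the matrix flow stress, \(f\) the void volume fraction and \(q_1, q_2, q_3\) Tvergaard's parameters (the thesis may use a slightly different variant):

```latex
\Phi = \left(\frac{\sigma_{eq}}{\sigma_y}\right)^{2}
     + 2\, q_1 f \cosh\!\left(\frac{3\, q_2\, \sigma_m}{2\, \sigma_y}\right)
     - \left(1 + q_3 f^{2}\right) = 0, \qquad q_3 = q_1^{2}.
```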
Abstract:
In this work we present the formulas for the calculation of exact three-center electron sharing indices (3c-ESI) and introduce two new approximate expressions for correlated wave functions. The 3c-ESI uses the third-order density, the diagonal of the third-order reduced density matrix, but the approximations suggested in this work only involve natural orbitals and occupancies. In addition, the first calculations of 3c-ESI using Valdemoro's, Nakatsuji's and Mazziotti's approximations for the third-order reduced density matrix are also presented for comparison. Our results on a test set of molecules, including 32 3c-ESI values, prove that the new approximation based on the cubic root of natural occupancies performs best, yielding absolute errors below 0.07 and an average absolute error of 0.015. Furthermore, this approximation seems to be rather insensitive to the amount of electron correlation present in the system. This newly developed methodology provides a computationally inexpensive method to calculate 3c-ESI from correlated wave functions and opens new avenues to approximating high-order reduced density matrices in other contexts, such as the contracted Schrödinger equation and the anti-Hermitian contracted Schrödinger equation.
Abstract:
Metaheuristic methods have become increasingly popular approaches for solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization. The topic has received increasing interest, especially in the evolutionary computation community, and several niching approaches have been suggested to allow multimodal optimization with evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima places additional demands on the design of algorithms that are to be effective in both respects in the context of multimodal optimization. In this thesis, several different multimodal optimization algorithms are studied with regard to how their implementation of the global and local search phases affects their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is generally not thoroughly understood, the research relies heavily on experimental studies to find out the properties of different approaches. To achieve reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and adequately varying problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. As part of this thesis, such a framework for generating tunable test functions for experimentally evaluating different methods of multimodal optimization is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may nullify the advantages gained from the global phase.
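As an illustration of the global search phase discussed above, a minimal DE/rand/1/bin sketch is given below; the thesis' niching and hybrid variants, and the local search method they are combined with, are not reproduced here:

```python
# Basic Differential Evolution (DE/rand/1/bin) on a box-constrained minimization problem.
import numpy as np

def differential_evolution(f, bounds, pop_size=40, F=0.5, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # global exploration step
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True         # ensure at least one gene crosses over
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                       # greedy one-to-one replacement
                pop[i], fit[i] = trial, f_trial
    return pop, fit  # for multimodal problems several good individuals may remain in the population
```

In a memetic hybrid of the kind discussed, each surviving individual (or niche representative) would additionally be refined with a separate local search routine.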
Abstract:
Because of technical principles, samples to be observed with electron microscopy need to be fixed in a chemical process and exposed to vacuum conditions that can produce changes in the morphology of the specimen. The aim of this work was to obtain high-resolution images of the fresh articular cartilage surface with an environmental scanning electron microscope (ESEM), an instrument that permits examination of biological specimens without fixation at a chamber pressure of 10 Torr, thus minimizing the risk of creating artifacts in the structure. Samples from weight-bearing areas of femoral condyles of New Zealand white rabbits were collected and photographed using an ESEM. Images were analyzed using a categorization based on the Jurvelin classification system as modified by Hong and Henderson. The elevations and depressions described in the classification were observed, but no fractures or splits of the cartilage surface, which are thought to be artifacts, were detected. The ESEM is a useful tool for obtaining images of the fresh articular cartilage surface without either employing fixation methods or exposing the specimen to extreme vacuum conditions, reducing the risk of introducing artifacts into the specimen. For all these reasons it could become a useful tool for quality control of the preservation process of osteochondral allografts in a musculoskeletal tissue bank.
Abstract:
Differential Scanning Calorimetry (DSC) was used to study the thermal behavior of hair samples and to verify the possibility of identifying an individual from DSC curves held in a data bank. Hair samples from students and staff of the Instituto de Química de Araraquara, UNESP, were collected to build the data bank. An individual participating incognito in this data bank was then identified using the DSC curves.
Abstract:
In modern warfare there is active development of a new trend connected with robotic warfare. One of the critical elements of robotic warfare systems is an automatic target recognition system, which allows objects to be recognized on the basis of data received from sensors. This work considers aspects of the optical realization of such a system by means of NIR target scanning at fixed wavelengths. An algorithm was designed, an experimental setup was built, and samples of various modern gear and apparel materials were tested. For pattern testing, camouflage samples from armies actively engaged in armed conflict were chosen. Tests were performed both in a clear atmosphere and in an artificially extremely humid and hot atmosphere to simulate field conditions.
Abstract:
Quantum chemical calculations for group 14 elements of the Periodic Table (C, Si, Ge, Sn, Pb) and their functional groups have been carried out using Density Functional Theory (DFT) based reactivity descriptors such as group electronegativities, hardness and softness. DFT calculations were performed for a large series of tetracoordinated Sn compounds of the CH3SnRR'X type, where X is a halogen and R and R' are alkyl, halogenated alkyl, alkoxy, or alkylthio groups. The results were interpreted in terms of the calculated electronegativity and hardness of the SnRR'X groups, applying a methodology previously developed by Geerlings and coworkers (J. Phys. Chem. 1993, 97, 1826). These calculations made it possible to identify regularities concerning the influence of the nature of the organic groups RR' and of the inorganic group X on the electronegativities and hardness of the SnRR'X groups; in particular, a very good correlation was found between the electronegativity of the fragment and the experimental 119Sn chemical shifts, a property that sensitively reflects changes in the valence electronic structure of molecules. This work was complemented with a study of some compounds of the EX and ER types, where E = C, Si, Ge, Sn and R = CH3, H, carried out to examine whether the central atom influences the electronegativity and hardness of the molecules, or whether these properties are mainly affected by the type of ligand bound to the central atom. All calculations were performed using the B3PW91 functional together with the 6-311++G** basis set for the H, C, Si, Ge, F, Cl and Br atoms and the 3-21G basis set for the Sn and I atoms.
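The descriptors mentioned above are commonly obtained from finite energy differences; a minimal sketch, assuming vertical Delta-SCF total energies of the neutral, cationic and anionic fragment computed at the same geometry and level of theory (conventions differ on the factor of 1/2 in the hardness):

```python
# Finite-difference estimates of Mulliken-type electronegativity, hardness and softness.

def descriptors(E_neutral, E_cation, E_anion):
    ip = E_cation - E_neutral        # vertical ionization potential
    ea = E_neutral - E_anion         # vertical electron affinity
    chi = 0.5 * (ip + ea)            # Mulliken-type electronegativity
    eta = 0.5 * (ip - ea)            # chemical hardness (Parr-Pearson convention)
    softness = 1.0 / eta             # global softness
    return chi, eta, softness
```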
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For training and ground truth estimation, the algorithm combines manual annotations from several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database comprising true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis and follows medical decision-making practice, providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases and the final algorithm are made publicly available on the web to set baseline results for automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented; optic disc localisation is discussed since normal eye fundus structures are fundamental in the characterisation of DR.
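A minimal sketch of the one-class, GMM-based detection idea described above (the colour spaces, correction steps and expert-annotation fusion of the thesis are not reproduced, and the helper names are hypothetical):

```python
# Fit a Gaussian mixture model to colours of annotated lesion pixels only, then
# score every pixel of a new image by its log-likelihood under that lesion model.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_lesion_model(lesion_pixels_rgb, n_components=8, seed=0):
    """lesion_pixels_rgb: (n_pixels, 3) array of colours sampled from lesion annotations."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed)
    gmm.fit(lesion_pixels_rgb)
    return gmm

def detect(gmm, image_rgb, threshold):
    """Return a boolean lesion mask by thresholding the per-pixel log-likelihood."""
    h, w, _ = image_rgb.shape
    scores = gmm.score_samples(image_rgb.reshape(-1, 3).astype(float))
    return scores.reshape(h, w) >= threshold

# Sweeping 'threshold' and comparing the resulting masks against ground truth yields
# the ROC curves used in image- and pixel-based evaluation protocols.
```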