940 results for Images - Computational methods


Relevance:

30.00%

Publisher:

Abstract:

The use of maps obtained from remotely sensed orbital images submitted to digital processing has become fundamental to optimizing conservation and monitoring actions for coral reefs. However, the accuracy reached in the mapping of submerged areas is limited by variation in the water column, which degrades the signal received by the orbital sensor and introduces errors into the final result of the classification. The limited capacity of traditional methods based on conventional statistical techniques to solve problems related to inter-class confusion motivated the search for alternative strategies in the field of Computational Intelligence. In this work, an ensemble of classifiers was built based on the combination of Support Vector Machines and a Minimum Distance Classifier, with the objective of classifying remotely sensed images of a coral reef ecosystem. The system is composed of three stages, through which the classification is progressively refined: patterns that receive an ambiguous classification at a given stage are re-evaluated in the subsequent stage, and an unambiguous prediction for all data is reached through the reduction or elimination of false positives. The images were classified into five bottom types: deep water, underwater corals, intertidal corals, algal bottom, and sandy bottom. The highest overall accuracy (89%) was obtained with an SVM using a polynomial kernel. The accuracy of the classified image was compared, by means of an error matrix, with the results obtained by other classification methods based on a single classifier (a neural network and the k-means algorithm). Finally, the comparison of results demonstrated the potential of ensemble classifiers as a tool for classifying images of submerged areas subject to the noise caused by atmospheric effects and the water column.
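A minimal sketch of the staged-ensemble idea, assuming scikit-learn and treating a small SVM decision margin as "ambiguous" (the threshold value is illustrative, not taken from the work):

```python
# Stage 1: an SVM resolves the easy patterns; patterns whose prediction is
# ambiguous (small margin between the two best class probabilities) are
# re-evaluated by a minimum-distance (nearest-centroid) classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestCentroid

CLASSES = ["deep water", "underwater corals", "intertidal corals",
           "algal bottom", "sandy bottom"]

def staged_classify(X_train, y_train, X, margin_threshold=0.2):
    svm = SVC(kernel="poly", degree=3, probability=True).fit(X_train, y_train)
    mdc = NearestCentroid().fit(X_train, y_train)   # minimum-distance stage

    proba = svm.predict_proba(X)
    labels = svm.classes_[np.argmax(proba, axis=1)]
    top2 = np.sort(proba, axis=1)[:, -2:]
    ambiguous = (top2[:, 1] - top2[:, 0]) < margin_threshold

    if ambiguous.any():                             # Stage 2: re-evaluate only those
        labels[ambiguous] = mdc.predict(X[ambiguous])
    return labels
```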

Relevance:

30.00%

Publisher:

Abstract:

Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most incident type in the majority of regions. Dermatoscopy and videodermatoscopy are the main types of examination for the diagnosis of dermatological diseases of the skin. The field involving the use of computational tools to assist or follow medical diagnosis of dermatological lesions is quite recent, and several methods have been proposed for the automatic classification of skin pathologies from images. The present work aims to present a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape, and texture features, using the Wavelet Packet Transform (WPT) and a learning technique called the Support Vector Machine (SVM). The Wavelet Packet Transform is applied for the extraction of texture features from the images; it consists of a set of basis functions that represent the image in different frequency bands, each with a distinct resolution corresponding to each scale. In addition, color features of the lesion are computed, which depend on a visual context influenced by the colors existing in its surroundings, and shape attributes are obtained through Fourier descriptors. The Support Vector Machine, which is based on the structural risk minimization principle from statistical learning theory, is used for the classification task. The SVM constructs optimal hyperplanes that represent the separation between classes; the generated hyperplane is determined by a subset of the training patterns, called support vectors. For the database used in this work, the results revealed good performance, with a global accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors together with the SVM classifier form a method capable of recognizing and classifying the analyzed skin lesions.
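A hedged sketch of the texture-feature step, assuming PyWavelets; the wavelet, decomposition depth, and SVM kernel here are illustrative assumptions, not the authors' settings:

```python
# Energies of 2D wavelet-packet subbands as texture features, fed to an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_texture_features(gray_image, wavelet="db4", level=2):
    wp = pywt.WaveletPacket2D(data=gray_image, wavelet=wavelet,
                              mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level)            # all subbands at the chosen depth
    return np.array([np.mean(node.data ** 2) for node in nodes])  # subband energies

# Usage sketch (color and Fourier shape descriptors would be concatenated too):
# features = np.vstack([wpt_texture_features(img) for img in images])
# clf = SVC(kernel="rbf").fit(features, labels)  # melanoma / non-melanoma / benign
```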

Relevance:

30.00%

Publisher:

Abstract:

With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access, and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which the characteristics of their semantic content are taken into account. Accordingly, several proposals, such as retrieval based on textual annotations, have been made. In the annotation approach, retrieval is based on the comparison between the textual description that a user provides for an image and the descriptions of the images stored in the database. Among its drawbacks, the textual description is very dependent on the observer, in addition to the effort required to describe all the images in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape, and texture. Results in the area of CBIR have been very promising, but representing the semantics of images through low-level features remains an open problem. New feature extraction algorithms as well as new indexing methods have been proposed in the literature; however, these algorithms have become increasingly complex. It is therefore natural to ask: is there a relationship between semantics and the low-level features extracted from an image? If so, which descriptors best represent the semantics? And, in turn, how should descriptors be used to represent the content of images? The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions. It was also observed that there are three possibilities for indexing images: using composite feature vectors, using parallel and independent index structures (one for each descriptor or set of descriptors), and using feature vectors sorted in a sequential order. The first two forms have been widely studied and applied in the literature, but there was no record of the third having been explored. This thesis therefore also proposes indexing with a sequential structure of descriptors, in which the order of the descriptors is based on the relationship between each descriptor and the semantics of the users. Finally, the index proposed in this thesis proved better than the traditional approaches, and it was shown experimentally that the order in this sequence matters: there is a direct relationship between this order and the relationship of the low-level descriptors with the semantics of the users.
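An illustrative sketch of the sequential-indexing idea, under the assumption that descriptors are applied one after another, ordered by their estimated relationship with user semantics, each stage pruning the candidate set (the function names and pruning fraction are hypothetical, not the thesis' code):

```python
import numpy as np

def sequential_search(query_vecs, db_vecs, descriptor_order, keep=0.2, k=10):
    """query_vecs/db_vecs: dict descriptor_name -> feature matrix (rows = images)."""
    candidates = np.arange(len(next(iter(db_vecs.values()))))
    for name in descriptor_order:          # most semantically related descriptor first
        d = np.linalg.norm(db_vecs[name][candidates] - query_vecs[name], axis=1)
        n_keep = max(k, int(len(candidates) * keep))
        candidates = candidates[np.argsort(d)[:n_keep]]  # prune to the best matches
    return candidates[:k]                  # final ranked answer set
```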

Relevance:

30.00%

Publisher:

Abstract:

Conventional methods for solving the nonlinear blind source separation problem generally employ a series of restrictions to obtain a solution, often leading to imperfect separation of the original sources and high computational cost. In this work, we use an alternative measure of independence based on information theory and apply tools from artificial intelligence to solve blind source separation problems, first linear and then nonlinear. In the linear model, genetic algorithms with Rényi's negentropy as the measure of independence are applied to find a separation matrix from linear mixtures of synthetic waveforms, audio, and images. A comparison is made with two Independent Component Analysis algorithms that are widespread in the literature. Subsequently, the same measure of independence is used as the cost function in a genetic algorithm to recover source signals that were mixed by nonlinear functions modeled by a radial basis function artificial neural network. Genetic algorithms are powerful tools for global search, and are therefore well suited for use in blind source separation problems. Tests and analyses are carried out through computer simulations.
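A toy sketch of the evolutionary linear-separation scheme: the thesis uses Rényi's negentropy as the independence measure, while this sketch substitutes a simple kurtosis-based non-Gaussianity proxy and a mutation-only genetic algorithm for brevity (all hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def nongaussianity(y):                     # stand-in for Rényi negentropy
    y = (y - y.mean()) / (y.std() + 1e-12)
    return abs(np.mean(y ** 4) - 3.0)      # |excess kurtosis|

def fitness(W, X):                         # X: mixtures, rows = channels
    return sum(nongaussianity(y) for y in W @ X)

def ga_separate(X, pop=60, gens=200, sigma=0.1):
    n = X.shape[0]
    population = [rng.normal(size=(n, n)) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda W: -fitness(W, X))
        elite = scored[: pop // 4]         # selection of the fittest matrices
        population = elite + [w + rng.normal(scale=sigma, size=w.shape)
                              for w in elite for _ in range(3)]  # mutation
    return max(population, key=lambda W: fitness(W, X))  # separation matrix
```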

Relevance:

30.00%

Publisher:

Abstract:

In Simultaneous Localization and Mapping (SLAM), a robot placed at an unknown location in an arbitrary environment must be able to build a representation of that environment (a map) and localize itself within it at the same time, using only information captured by the robot's sensors and known control signals. Recently, driven by the advance of computing power, work in this area has proposed using a video camera as the sensor, giving rise to Visual SLAM. There are several approaches to Visual SLAM, and the vast majority work by extracting features from the environment, computing the necessary correspondences, and from these estimating the required parameters. This work presents a monocular visual SLAM system that uses direct image registration: it computes the image reprojection error and applies optimization methods that minimize this error, thereby obtaining the parameters of the robot pose and the map of the environment directly from the pixels of the images. The steps of feature extraction and matching are thus not needed, enabling the system to work well in environments where traditional approaches have difficulty. Moreover, by addressing the SLAM problem in this way we avoid a problem that is very common in traditional approaches, known as error propagation. Because of the high computational cost of this approach, several types of optimization methods were tested in order to find a good balance between estimate quality and processing time. The results presented in this work show the success of this system in different environments.
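A minimal illustration of the direct-registration principle, not this system's implementation: motion is estimated by minimizing the photometric error between a reference image and a warped current image, with no feature extraction or matching. A full SLAM system optimizes a 6-DoF pose and scene depth; the sketch below assumes a plain 2D translation warp to keep the idea visible:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.ndimage import shift as warp_translate

def photometric_residual(p, ref, cur):
    warped = warp_translate(cur, shift=(p[1], p[0]), order=1)  # bilinear warp
    return (warped - ref).ravel()          # per-pixel intensity error

def register(ref, cur):
    res = least_squares(photometric_residual, x0=np.zeros(2), args=(ref, cur),
                        method="lm")       # Levenberg-Marquardt minimization
    return res.x                           # estimated (dx, dy)
```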

Relevance:

30.00%

Publisher:

Abstract:

Two methods to evaluate the state transition matrix are implemented and analyzed to verify the computational cost and the accuracy of both. This evaluation represents one of the highest computational costs in the artificial satellite orbit determination task. The first method is an approximation of the Keplerian motion, providing an analytical solution that is then evaluated numerically by solving Kepler's equation. The second is a local numerical approximation that includes the effect of J2 (the second zonal harmonic). The analysis is performed by comparing these two methods with a reference generated by a numerical integrator. For small time intervals (1 to 10 s), and when more accuracy is needed, the second method is recommended, since its CPU time does not excessively overload the computer during the orbit determination procedure. For larger time intervals, and when more stability in the calculation is expected, the first method is recommended.
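The numerical core of the first method is the solution of Kepler's equation M = E - e sin(E) for the eccentric anomaly E; a standard Newton iteration for it might look like this (a generic sketch, not the paper's code):

```python
import math

def kepler_E(M, e, tol=1e-12, max_iter=50):
    E = M if e < 0.8 else math.pi          # standard initial guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))  # Newton step
        E -= dE
        if abs(dE) < tol:
            break
    return E
```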

Relevance:

30.00%

Publisher:

Abstract:

Oral administration of solid dosage forms is usually preferred in drug therapy. Conventional imaging methods are essential tools to investigate the in vivo performance of these formulations. The non-invasive technique of AC biosusceptometry has been introduced as an alternative in studies of gastrointestinal motility and, more recently, to evaluate the behaviour of magnetic tablets in vivo. The aim of this work was to employ a multisensor AC biosusceptometer system to obtain magnetic images of tablet disintegration in vitro and in the human stomach. The results showed that the transition between the magnetic marker and the magnetic tracer characterized the onset of disintegration (t50) and occurred within a short time interval (1.1 +/- 0.4 min). The multisensor AC biosusceptometer was reliable for monitoring and analysing the in vivo performance of magnetic tablets, accurately quantifying disintegration through the magnetic images and characterizing the profile of this process.
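A heavily hedged toy sketch, not the instrument's software: assuming a tracer time series extracted from the magnetic images, the disintegration onset t50 can be read as the instant the tracer signal crosses 50% of its final amplitude, marking the marker-to-tracer transition described above:

```python
import numpy as np

def t50(time, tracer_signal):
    # Normalize the tracer signal to its initial and final amplitudes.
    s = (tracer_signal - tracer_signal[0]) / (tracer_signal[-1] - tracer_signal[0])
    idx = np.argmax(s >= 0.5)              # first sample at or above 50%
    return time[idx]
```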

Relevance:

30.00%

Publisher:

Abstract:

Remote sensing is a technology of great importance, allowing the capture of data from the Earth's surface for various purposes, including environmental monitoring, tracking of natural resource usage, geological prospecting, and disaster monitoring. One of its main applications is the generation of thematic maps and the subsequent surveying of areas from images generated by orbital or sub-orbital sensors. Pattern classification methods are used to implement computational routines that automate this activity. Artificial neural networks are a viable alternative to traditional statistical classifiers, mainly for applications whose data have high dimensionality, such as those from hyperspectral sensors. The main goal of this work is to develop a classifier based on radial basis function neural networks and Growing Neural Gas, which presents some advantages over using individual neural networks. The main idea is to use the incremental characteristics of Growing Neural Gas to determine the number and placement of the radial basis function network's centers, in order to obtain a highly effective classifier. To demonstrate the performance of the classifier, three case studies are presented along with their results.
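A sketch of the classifier construction, assuming scikit-learn: prototype vectors define the radial basis centers, and the output layer is solved linearly. In the work the prototypes come incrementally from Growing Neural Gas; that algorithm is omitted here and k-means stands in as the center-finding step:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_rbf(X, y_onehot, n_centers=30, gamma=1.0):
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    # Hidden layer: Gaussian activation of each sample w.r.t. each center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-gamma * d2)
    W, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)  # linear output weights
    return centers, W

def rbf_predict(Xq, centers, W, gamma=1.0):
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.argmax(np.exp(-gamma * d2) @ W, axis=1)
```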

Relevance:

30.00%

Publisher:

Abstract:

Visual estimates are generally used for counts of horn flies, Haematobia irritans (L.), and play an important role as an instrument for quantifying fly populations in scientific studies. In this study, horn fly counts were performed on 30 Nelore steers in the municipality of Araçatuba, SP, Brazil, from January to December 1998. Flies were counted weekly by two methods: the estimate method, whereby the number of flies on one side of the animal is estimated by visual observation, and the filming method, whereby images of flies from both sides of the animal are recorded with a video camera; the tape is then played on a videotape recorder coupled to a television and the flies are counted on the screen. Both methods showed variations in horn fly population density during the period studied. However, significant differences (p < 0.05) were observed between the two methods, with the filming method permitting the visualization of a larger number of flies than the estimate method. In addition, the filming method permitted safe and reliable counts hours after the images were taken, with the advantage that the tape can serve as an archive for random re-counts. (C) 2002 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Objective: The aim of this study was to evaluate a simple mnemonic rule (the RB-RB/LB-LB rule) for recording intra-oral radiographs with optimal projection for the control of dental implants. Methods: 30 third-year dental students received a short lesson on the RB-RB/LB-LB mnemonic rule. The rule is as follows: if right blur, then raise beam (RB-RB), i.e. if the implant threads are blurred on the right side of the implant, the X-ray beam direction must be raised towards the ceiling to obtain sharp threads on both implant sides; if left blur, then lower beam (LB-LB), i.e. if the implant threads are blurred on the left side of the implant, the X-ray beam direction must be lowered towards the floor to obtain sharp threads on both implant sides. Intra-oral radiographs of four screw-type implants placed with different inclinations in a Frasaco upper or lower jaw dental model (Frasaco GmbH, Tettnang, Germany) were recorded. The students were unaware of the inclination of the implants and were instructed to re-expose each implant, applying the mnemonic rule, until an image of the implant with acceptable quality (subjectively judged by the instructor) was obtained. Subsequently, each radiograph was blindly assessed with respect to the sharpness of the implant threads and assigned to one of four quality categories: (1) perfect, (2) not perfect but clinically acceptable, (3) not acceptable, and (4) hopeless. Results: For all implants, from one non-perfect exposure to the next, a higher score was obtained in 64% of the cases, the same score in 28%, and a lower score in 8%. Only a small variation was observed among exposures of implants with different inclinations. On average, two exposures per implant (range: one to eight) were needed to obtain a clinically acceptable image. Conclusion: The RB-RB/LB-LB mnemonic rule for recording intra-oral radiographs of dental implants with a correct projection was easy to implement by inexperienced examiners. Dentomaxillofacial Radiology (2012) 41, 298-304. doi: 10.1259/dmfr/20861598
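The rule itself reduces to a one-line decision; a hypothetical sketch, assuming the blur side has already been judged from the radiograph:

```python
def beam_correction(blurred_side):
    # RB-RB: right blur -> raise beam; LB-LB: left blur -> lower beam.
    return {"right": "raise beam",
            "left": "lower beam"}.get(blurred_side, "projection OK")
```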

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To evaluate the accuracy of Image Tool Software 3.0 (ITS 3.0) in detecting marginal microleakage, using the stereomicroscope as the validation criterion and ITS 3.0 as the tool under study. Materials and Methods: Class V cavities were prepared at the cementoenamel junction of 61 bovine incisors, and 53 tooth halves were used. Under the stereomicroscope, microleakage was classified dichotomously as present or absent. Next, ITS 3.0 was used to obtain measurements of the microleakage, with 0.75 taken as the cut-off point: values equal to or greater than 0.75 indicated its presence, while values between 0.00 and 0.75 indicated its absence. Sensitivity and specificity were calculated as point estimates with 95% confidence intervals (95% CI). Results: The accuracy of ITS 3.0 was verified, with a sensitivity of 0.95 (95% CI: 0.89 to 1.00) and a specificity of 0.92 (95% CI: 0.84 to 0.99). Conclusion: Digital diagnosis of marginal microleakage using ITS 3.0 was sensitive and specific.
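A worked sketch of the evaluation, with hypothetical data arrays: dichotomize the ITS 3.0 measurements at the 0.75 cut-off and compute sensitivity and specificity against the stereomicroscope reference:

```python
import numpy as np

def sens_spec(its_values, reference, cutoff=0.75):
    pred = np.asarray(its_values) >= cutoff        # >= 0.75 -> microleakage present
    ref = np.asarray(reference, dtype=bool)        # stereomicroscope ground truth
    sens = (pred & ref).sum() / ref.sum()          # true positives / positives
    spec = (~pred & ~ref).sum() / (~ref).sum()     # true negatives / negatives
    return sens, spec
```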

Relevance:

30.00%

Publisher:

Abstract:

Nowadays the real contribution of light to the acceleration of the chemical reaction in dental bleaching is met with skepticism, mostly because the actual mechanisms of this contribution are still obscure. Objectives: To determine the influence of the pigment of three colored bleaching gels on light distribution and absorption in teeth; bovine teeth and three colored bleaching gels were used in this experiment. It is well known that dark molecules absorb light and increase the local temperature, raising the bleaching rate; these molecules are located at the interface between enamel and dentin. Methods: This study was carried out using an argon laser at 455 nm with 150 mW of power, an LED with the same characteristics, and three colored gels (green, blue, and red); digital images were captured with a CCD camera connected to a PC. The images were processed in a mathematical environment (MATLAB R12). Results: The results show that the color of the bleaching gel significantly influences the absorption of light at specific sites of the teeth. Conclusions: This poor absorption may be one of the major factors behind the skepticism about the contribution of light to the process that can be observed in the literature nowadays.

Relevance:

30.00%

Publisher:

Abstract:

New formulations, techniques, and devices have made dental whitening safer and more effective. Despite this, whitening levels are still verified by visual comparison, an empirical, subjective method that is prone to error and dependent on individual interpretation. Normally, the result of whitening is expressed as the magnitude of the shift between the initial and the final color, taking as reference the shades of a color scale ordered from darkest to lightest. Although it is the most widely used scale, the ordering of the Vita Classical (R) scale recommended by its manufacturer (Vita) proves inadequate for the evaluation of whitening. Using digital images and the OER algorithm (ordering of the reference scale), developed especially for ScanWhite (C), the shades of the Vita Classical (R) scale were reordered. For this purpose, the mean values of the R, G, and B color channels over the middle portion of the crowns were adopted as the reference for evaluation. The images were taken with a Sony Cybershot DSC-F828 camera. The results of the computational ordering were compared with the sequence proposed by the manufacturer and with the one obtained by visual evaluation, carried out by 10 volunteers under standardized illumination conditions. Statistical analysis demonstrated significant differences between the orderings.
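An illustrative sketch of the ordering idea behind an algorithm like OER (the actual ScanWhite (C) code is not described here): order shade-tab crops from darkest to lightest by the mean of their R, G, and B channels:

```python
import numpy as np

def order_shades(shade_images, shade_names):
    """shade_images: list of HxWx3 arrays cropped to each crown's middle region."""
    brightness = [img.reshape(-1, 3).mean() for img in shade_images]
    order = np.argsort(brightness)         # darkest first
    return [shade_names[i] for i in order]
```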

Relevance:

30.00%

Publisher:

Abstract:

This work describes an optical device for the simultaneous recording of shadowgrams and schlieren images, and some results are presented concerning its application to the study of plasma-assisted flow control on airfoil models. This approach offers many advantages in comparison with other methods, especially because tracer particles (like smoke in wind tunnels) are not required for the experiments, thus avoiding contamination of the electric discharges or air flows. Besides, while schlieren images reveal the refractive index gradients in the area of study, shadowgrams detect the second-order spatial derivatives of the refractive index; the simultaneous recording of these different images may therefore give interesting information about the phenomena under study. In this paper, these images were used to confirm the existence of vortex structures in the flow induced by corona discharges on airfoil models. These structures are a possible explanation for the effects of drag reduction and lift increase that have been reported in experiments on plasma-assisted aerodynamics.
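The complementarity noted above can be illustrated numerically: from a 2D refractive-index field n(x, y), schlieren contrast follows the first derivatives of n, while shadowgraph contrast follows its Laplacian. A generic sketch:

```python
import numpy as np

def schlieren_and_shadowgraph(n_field, dx=1.0):
    gy, gx = np.gradient(n_field, dx)      # schlieren ~ dn/dx, dn/dy
    lap = (np.gradient(gx, dx, axis=1)
           + np.gradient(gy, dx, axis=0))  # shadowgraph ~ laplacian of n
    return gx, gy, lap
```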
