37 results for Image recognition and processing


Relevance:

100.00%

Publisher:

Abstract:

The methodology for fracture analysis of polymeric composites with scanning electron microscopes (SEM) is still under discussion. Many authors prefer to use sputter coating with a conductive material instead of applying low-voltage (LV) or variable-pressure (VP) methods, which preserve the original surfaces. The present work examines the effects of sputter coating with 25 nm of gold on the topography of carbon-epoxy composite fracture surfaces, using an atomic force microscope. The influence of SEM imaging parameters on fractal measurements is also evaluated for the VP-SEM and LV-SEM methods. Topographic measurements were not significantly affected by the gold coating at the tested scale. Moreover, changes in the SEM setup lead to nonlinear outcomes in texture parameters, such as fractal dimension and entropy values. For VP-SEM and LV-SEM, fractal dimension and entropy values did not present any evident relation with image quality parameters, but the resolution must be optimized with the imaging setup, accompanied by charge neutralization. © Wiley Periodicals, Inc.
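The fractal dimension mentioned above is commonly estimated by box counting. As an illustration only (the paper measures textural fractal dimension and entropy on grayscale SEM images, which requires more elaborate estimators), a minimal box-counting sketch for a binary mask:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image.

    For each box size s, count the boxes containing at least one
    foreground pixel; the dimension is the slope of log(count)
    versus log(1/s).
    """
    counts = []
    n = mask.shape[0]
    for s in sizes:
        trimmed = mask[: n - n % s, : n - n % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square should have dimension close to 2.
img = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(img), 2))  # → 2.0
```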

Relevance:

100.00%

Publisher:

Abstract:

Objective: The objective of this study was to assess the use of analgesics, describe the attitudes of Brazilian veterinarians towards pain relief in horses and cattle, and evaluate differences due to gender, year of graduation and type of practice. Study design: Prospective survey. Methods: Questionnaires were sent to 1000 large animal veterinarians by mail and internet, and delivered in person during national meetings. The survey investigated the attitudes of Brazilian veterinarians to the recognition and treatment of pain in large animals and consisted of sections asking about demographic data, use of analgesic drugs, attitudes to pain relief and the assessment of pain. Descriptive statistics were used to analyze frequencies. Simple post hoc comparisons were performed using the chi-square test. Results: Eight hundred questionnaires were collected, but 87 were discarded because they were incomplete or blank. The opioid of choice for use in large animals was butorphanol (43.4%), followed by tramadol (39%). Flunixin (83.2%) and ketoprofen (67.6%) were the NSAIDs most frequently used by Brazilian veterinarians. Respondents indicated that horses received preoperative analgesics for laparotomy more frequently (72.9%) than cattle (58.5%). The most frequently administered preoperative drugs for laparotomy in horses were flunixin (38.4%) and xylazine (23.6%), whereas the preoperative drugs for the same surgical procedure in cattle were xylazine (31.8%) and the local administration of lidocaine (48%). Fracture repair was considered the most painful surgical procedure for both species. Most veterinarians (84.1%) believed that their knowledge in this area was not adequate. Conclusions and clinical relevance: Although these Brazilian veterinarians thought that their knowledge of the recognition and treatment of pain was not adequate, the use of analgesics in large animals in Brazil was similar to that reported in other countries.
© 2013 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.
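The post hoc comparisons above rely on the chi-square test of independence. A worked sketch with an illustrative 2x2 table (counts are invented to roughly match the reported 72.9% versus 58.5% preoperative-analgesia rates, not taken from the paper):

```python
import numpy as np

def chi_square_independence(table):
    """Pearson chi-square statistic for a 2-D contingency table."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

# Illustrative counts: preoperative analgesia given / not given.
horses_vs_cattle = [[520, 193],   # horses (~72.9% yes)
                    [417, 296]]   # cattle (~58.5% yes)
stat = chi_square_independence(horses_vs_cattle)
print(stat > 3.841)  # exceeds the df=1, alpha=0.05 critical value → True
```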

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

In this letter, a semiautomatic method for road extraction in object space is proposed that combines a stereoscopic pair of low-resolution aerial images with a digital terrain model (DTM) structured as a triangulated irregular network (TIN). First, we formulate an objective function in the object space to allow the modeling of roads in 3-D. In this model, the TIN-based DTM allows the search for the optimal polyline to be restricted along a narrow band that is overlaid upon it. Finally, the optimal polyline for each road is obtained by optimizing the objective function using the dynamic programming optimization algorithm. A few seed points need to be supplied by an operator. To evaluate the performance of the proposed method, a set of experiments was designed using two stereoscopic pairs of low-resolution aerial images and a TIN-based DTM with an average resolution of 1 m. The experimental results showed that the proposed method worked properly, even when faced with anomalies along roads, such as obstructions caused by shadows and trees.
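The dynamic-programming step can be sketched as a Viterbi-style search: each road section contributes a set of candidate vertices inside the narrow band, and the polyline minimizing local cost plus a smoothness term is recovered by backtracking. The names and the simple distance-based transition cost below are illustrative, not the paper's object-space objective function:

```python
import numpy as np

def optimal_polyline(candidates, node_cost, smooth_weight=1.0):
    """Viterbi-style DP over per-stage candidate points.

    candidates: list of (k_i, 2) arrays -- points in the narrow band
    node_cost:  list of (k_i,) arrays   -- local (e.g. image) cost
    Returns the index chosen at each stage for the minimum-cost polyline.
    """
    cost = [node_cost[0].astype(float)]
    back = []
    for i in range(1, len(candidates)):
        # Transition cost: penalize long jumps between consecutive vertices.
        d = np.linalg.norm(candidates[i][:, None, :] - candidates[i - 1][None, :, :], axis=2)
        total = cost[-1][None, :] + smooth_weight * d   # shape (k_i, k_{i-1})
        back.append(np.argmin(total, axis=1))
        cost.append(node_cost[i] + np.min(total, axis=1))
    # Backtrack from the cheapest final vertex.
    path = [int(np.argmin(cost[-1]))]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

cands = [np.array([[0.0, 0], [0, 5]]),
         np.array([[1.0, 0], [1, 5]]),
         np.array([[2.0, 0], [2, 5]])]
costs = [np.array([0.0, 10.0])] * 3   # index 0 (the y = 0 band) is cheaper
print(optimal_polyline(cands, costs))  # → [0, 0, 0]
```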

Relevance:

100.00%

Publisher:

Abstract:

Some photosensitizers (PSs) used for PACT (antimicrobial photodynamic therapy) show an affinity for bacterial walls and can be photo-activated to cause the desired damage. However, on dentine, bacteria may be less susceptible to PACT as a result of limited penetration of the PS. The aim of this study was to evaluate the diffusion of a hematoporphyrin-based PS through dentine structures. Twelve bovine incisors were used. Class III cavities (3 x 3 x 1 mm) were prepared on the mesial or distal surfaces using a diamond bur. Photogem (R) solution at 1 mg/mL (10 µL for each cavity) was used. The experimental groups were divided according to the thickness of the remaining dentine and whether the surface was acid-etched before PS application. The fluorescence excitation source was a VelScope (R) system. For image capture, a scientific CCD color camera (PixelFly (R)) was coupled to the VelScope. For image acquisition and processing, a computational routine was developed in Matlab (R). Fick's law was used to obtain the average diffusion coefficient of the PS. Differences were found between all groups. The longitudinal temporal diffusion was influenced by exposure time, dentine thickness and acid etching.
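The paper recovers the diffusion coefficient from fluorescence image sequences via Fick's law; as a much-simplified sketch, if the penetration front is assumed to advance as x(t) ≈ sqrt(4·D·t) (a rough, illustrative model, not the paper's fitting procedure), D follows from a linear fit of x² against t:

```python
import numpy as np

def estimate_diffusion_coefficient(times, depths):
    """Estimate D from penetration-front depth measurements.

    Assumes the front advances roughly as x(t) = sqrt(4*D*t), so a
    linear fit of x**2 against t gives slope = 4*D.
    """
    slope, _ = np.polyfit(np.asarray(times, float), np.asarray(depths, float) ** 2, 1)
    return slope / 4.0

# Synthetic front positions generated with D = 2.5 (arbitrary units).
t = np.array([10.0, 20.0, 40.0, 80.0])
x = np.sqrt(4 * 2.5 * t)
print(round(estimate_diffusion_coefficient(t, x), 6))  # → 2.5
```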

Relevance:

100.00%

Publisher:

Abstract:

A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error.
The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
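The window-filter design idea above can be made concrete for binary signals: for every observed window pattern, output the majority ideal centre value seen in the training pairs, which is the plug-in estimate of the optimal filter. A minimal 1-D sketch (the signals and window width are illustrative):

```python
from collections import defaultdict

def design_window_filter(observed, ideal, width=3):
    """Design a 1-D window-based binary filter from (observed, ideal) pairs.

    For every window pattern seen in the observed signals, record the
    ideal centre value; the designed filter outputs the majority vote.
    """
    votes = defaultdict(lambda: [0, 0])
    half = width // 2
    for obs, ide in zip(observed, ideal):
        for i in range(half, len(obs) - half):
            pattern = tuple(obs[i - half: i + half + 1])
            votes[pattern][ide[i]] += 1
    return {p: int(v[1] >= v[0]) for p, v in votes.items()}

def apply_filter(table, signal, width=3):
    half = width // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        pattern = tuple(signal[i - half: i + half + 1])
        out[i] = table.get(pattern, signal[i])  # unseen pattern: identity
    return out

# Train on a pair where isolated spikes are noise to be removed.
observed = [[0, 0, 1, 0, 0, 1, 1, 1, 0, 0]]
ideal    = [[0, 0, 0, 0, 0, 1, 1, 1, 0, 0]]
table = design_window_filter(observed, ideal)
print(apply_filter(table, [0, 1, 0, 0, 1, 1, 1, 0]))  # → [0, 0, 0, 0, 1, 1, 1, 0]
```

The design cost discussed in the abstract shows up here directly: with few training pairs, many window patterns are unseen or estimated from a handful of votes.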

Relevance:

100.00%

Publisher:

Abstract:

A digital image processing and analysis method has been developed to classify the shape and evaluate the size and morphology parameters of corrosion pits. The method appears effective for analyzing surfaces with either a low or a high degree of pitting formation. Theoretical geometry data have been compared against experimental data obtained for titanium and aluminum alloys subjected to different corrosion tests. (C) 2002 Elsevier B.V. All rights reserved.
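A common shape parameter in this kind of pit analysis is circularity. The sketch below shows only that shape-factor step, with an illustrative threshold (the paper's actual classification criteria are not reproduced here):

```python
import math

def classify_pit_shape(area, perimeter, round_threshold=0.8):
    """Classify a corrosion pit from its measured area and perimeter.

    Circularity 4*pi*A/P**2 is 1.0 for a perfect circle and decreases
    for elongated or irregular pits (threshold is illustrative only).
    """
    circularity = 4.0 * math.pi * area / perimeter ** 2
    shape = "near-circular" if circularity >= round_threshold else "irregular"
    return circularity, shape

# A circle of radius 10: A = pi*r**2, P = 2*pi*r -> circularity 1.0.
c, shape = classify_pit_shape(math.pi * 100, 2 * math.pi * 10)
print(round(c, 2), shape)  # → 1.0 near-circular
```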

Relevance:

100.00%

Publisher:

Abstract:

The applications of Automatic Vowel Recognition (AVR), a sub-task of fundamental importance in most speech processing systems, range from automatic interpretation of spoken language to biometrics. State-of-the-art systems for AVR are based on traditional machine learning models such as Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs); however, such classifiers cannot deliver efficiency and effectiveness at the same time, leaving a gap to be explored when real-time processing is required. In this work, we present an algorithm for AVR based on the Optimum-Path Forest (OPF), an emergent pattern recognition technique recently introduced in the literature. Adopting a supervised training procedure and using speech tags from two public datasets, we observed that OPF outperformed ANNs, SVMs and other classifiers in terms of training time and accuracy. ©2010 IEEE.
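A highly simplified sketch of supervised OPF with the fmax path cost follows. Prototype selection here is naive (first sample of each class), whereas the real algorithm picks prototypes on the boundary of a minimum spanning tree between classes; the Prim-like cost propagation and the classification rule are the essential ideas:

```python
import numpy as np

def opf_train(X, y):
    """Simplified Optimum-Path Forest training (fmax path cost).

    Every training sample receives the cost of the best path from a
    class prototype, where a path's cost is its maximum edge length.
    """
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    cost = np.full(n, np.inf)
    label = y.copy()
    for c in np.unique(y):
        cost[np.flatnonzero(y == c)[0]] = 0.0   # naive prototype per class
    done = np.zeros(n, bool)
    for _ in range(n):
        s = np.argmin(np.where(done, np.inf, cost))
        done[s] = True
        relax = np.maximum(cost[s], d[s])        # fmax path cost
        better = ~done & (relax < cost)
        cost[better] = relax[better]
        label[better] = label[s]
    return cost, label

def opf_classify(Xtr, cost, label, sample):
    """A test sample takes the label minimizing max(training cost, distance)."""
    d = np.linalg.norm(Xtr - sample, axis=1)
    return label[np.argmin(np.maximum(cost, d))]

Xtr = np.array([[0.0, 0], [1, 0], [10, 0], [11, 0]])
ytr = np.array([0, 0, 1, 1])
cost, label = opf_train(Xtr, ytr)
print(opf_classify(Xtr, cost, label, np.array([2.0, 0])))  # → 0
```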

Relevance:

100.00%

Publisher:

Abstract:

Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as the final partitioning tool for graphs built by some chosen modeling method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Using the Berkeley Segmentation Data Set and Benchmark, our goal is to compare the results obtained with this method against previous work to validate its performance.
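The NCut step itself reduces to the generalized eigenproblem (D - W) y = λ D y, whose second-smallest eigenvector is thresholded to bipartition the graph (the Shi-Malik relaxation). A minimal sketch on a hand-built toy adjacency matrix, standing in for the Watershed-derived graphs of the paper:

```python
import numpy as np

def ncut_bipartition(W):
    """Two-way Normalized Cut on a weighted adjacency matrix W.

    Solves the relaxed problem via the symmetrically normalized
    Laplacian and thresholds the second-smallest eigenvector at zero.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)        # eigenvalues ascending
    y = D_inv_sqrt @ vecs[:, 1]               # second-smallest eigenvector
    return (y > 0).astype(int)

# Two weakly connected triangles: nodes 0-2 versus 3-5.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                       # weak bridge
print(ncut_bipartition(W))  # one triangle per segment (labels or their complement)
```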

Relevance:

100.00%

Publisher:

Abstract:

Most face recognition approaches require prior training, where a given distribution of faces is assumed in order to predict the identity of test faces. Such approaches may have difficulty identifying faces drawn from distributions different from the one provided during training. A face recognition technique that performs well regardless of training is therefore interesting as a basis for more sophisticated methods. In this work, the Census Transform is applied to describe the faces. Based on a scanning window that extracts local histograms of Census features, we present a method that directly matches face samples. With this simple technique, 97.2% of the faces in the FERET fa/fb test were correctly recognized. Although this is an easy test set, we have found no other approach in the literature that achieves such performance with a direct comparison of faces. Room for further improvement is also indicated: among other techniques, we demonstrate how the use of SVMs over the Census Histogram representation can increase recognition performance.
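The descriptor can be sketched in a few lines: the 3x3 Census Transform encodes each pixel as an 8-bit comparison code, and local histograms of these codes over a grid of windows form the face descriptor. The window size and comparison convention below are assumptions, not the paper's exact settings:

```python
import numpy as np

def census_transform(img):
    """3x3 Census Transform on a uint8 image: each pixel becomes an
    8-bit code where bit k is 1 if the k-th neighbour >= the centre."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    offsets = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[dy: dy + h - 2, dx: dx + w - 2]
        out |= (neighbour >= centre).astype(np.uint8) << bit
    return out

def census_histograms(codes, win=8):
    """Local histograms of census codes over non-overlapping windows,
    concatenated into one descriptor vector."""
    hists = []
    for y in range(0, codes.shape[0] - win + 1, win):
        for x in range(0, codes.shape[1] - win + 1, win):
            block = codes[y: y + win, x: x + win]
            hists.append(np.bincount(block.ravel(), minlength=256))
    return np.concatenate(hists)

flat = np.zeros((5, 5), dtype=np.uint8)
print(census_transform(flat)[0, 0])  # → 255 (every neighbour >= centre)
codes = census_transform(np.random.default_rng(0).integers(0, 256, (18, 18), dtype=np.uint8))
print(census_histograms(codes, win=8).shape)  # → (1024,)
```

Direct matching then amounts to comparing descriptor vectors, e.g. by histogram intersection or Euclidean distance.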

Relevance:

100.00%

Publisher:

Abstract:

We present the construction of a homogeneous phantom to be used in simulating the scattering and absorption of X-rays by a standard patient's chest and skull when irradiated laterally. The phantom consisted of Lucite and aluminium plates, with thicknesses determined by a tomographic exploratory method applied to an anthropomorphic phantom. Using this phantom, an optimized radiographic technique was established for the chest and skull of a standard-sized patient in lateral view. Images generated with this optimized technique demonstrated improved image quality and reduced radiation doses. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we address the problem of feature selection by introducing a new approach based on the Gravitational Search Algorithm (GSA). The proposed algorithm combines the optimization behavior of GSA with the speed of the Optimum-Path Forest (OPF) classifier in order to provide a fast and accurate framework for feature selection. Experiments on datasets from a wide range of applications, such as vowel recognition, image classification and fraud detection in power distribution systems, are conducted in order to assess the robustness of the proposed technique against Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and a Particle Swarm Optimization (PSO)-based algorithm for feature selection.
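The GSA core can be sketched as follows. The paper applies it to feature masks scored by OPF accuracy, whereas this illustration minimizes a continuous function; all constants (initial gravity, decay rate, search bounds) are common defaults, not tuned values from the paper:

```python
import numpy as np

def gsa_minimize(f, dim, n_agents=20, iters=100, seed=0):
    """Core Gravitational Search Algorithm loop (continuous minimization).

    Better fitness -> larger mass -> stronger gravitational pull; the
    gravity constant G decays over time so the swarm settles.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n_agents, dim))
    V = np.zeros_like(X)
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        G = 100.0 * np.exp(-20.0 * t / iters)        # decaying gravity
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)                    # normalized masses
        A = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            R = np.linalg.norm(diff, axis=1) + 1e-12
            # Agent i's own mass cancels in a = F / M_i.
            w = rng.random(n_agents) * G * M / R
            A[i] = (w[:, None] * diff).sum(axis=0)
        V = rng.random(X.shape) * V + A
        X = np.clip(X + V, -5.0, 5.0)                # stay in the search box
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())

best_x, best_val = gsa_minimize(lambda x: float((x ** 2).sum()), dim=3)
print(round(best_val, 3))
```

For feature selection, the position vector would be binarized into a mask and f would be the (negated) classifier accuracy on the selected features.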

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)