967 results for Digital Image Analysis


Relevance: 90.00%
Publisher:
Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 90.00%
Publisher:
Abstract:

Pós-graduação em Ciências Cartográficas - FCT

Relevance: 90.00%
Publisher:
Abstract:

Pós-graduação em Ciência da Computação - IBILCE

Relevance: 90.00%
Publisher:
Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%
Publisher:
Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 90.00%
Publisher:
Abstract:

Image-capture techniques have advanced along with information and communication technologies, and vast amounts of information and imagery are now stored in digital environments. The objective of this study is to point out the difficulties found in constructing imagetic representations of digital resources using the instruments available for the treatment of descriptive information. As results, we present a mapping of descriptive elements for digital images, derived from an analysis of the schemes that guide the construction of descriptive records (AACR2R, ISBD, Graphic Materials, RDA, CDWA, CCO) and of the FRBRer conceptual model. This analysis led to the conceptual model Functional Requirements for Digital Imagetic Data (RFDID), aimed at developing more efficient ways of representing imagery so as to make it available, accessible and retrievable, with persistence of descriptive data, flexibility, consistency and integrity as essential requirements for the representation of the digital image.
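As an illustration of what a descriptive record built from elements common to these schemes might hold, here is a hypothetical minimal structure; the field names are assumptions chosen for illustration, not the RFDID model itself:

```python
from dataclasses import dataclass, field

@dataclass
class ImageDescriptiveRecord:
    """Hypothetical minimal descriptive record for a digital image, loosely
    based on elements shared by schemes such as AACR2R, ISBD, RDA, CDWA and
    CCO; the field names are illustrative, not the RFDID requirements."""
    title: str
    creator: str = ""
    date_created: str = ""
    medium: str = "digital image"
    dimensions: str = ""                         # e.g. pixel dimensions
    subjects: list = field(default_factory=list)
    rights: str = ""
    identifier: str = ""                         # persistent ID for retrievability
```

Persisting such records with stable identifiers is one way to meet the persistence and retrievability requirements the abstract names.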

Relevance: 90.00%
Publisher:
Abstract:

With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, presenting precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results on the processing-time and multiscale-separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks over the very large databases that are common nowadays.
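The abstract describes HTS only at a high level. As an illustration, the idea of deriving a fixed-length shape descriptor from statistics of a Hough accumulator can be sketched as follows; the accumulator construction and the choice of per-angle mean and standard deviation are assumptions for illustration, not the published HTS/HTSn algorithm:

```python
import numpy as np

def hough_accumulator(points, n_theta=36, n_rho=16):
    """Vote a set of 2-D contour points into a (theta, rho) Hough accumulator
    for straight lines: rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    pts = np.asarray(points, dtype=float)
    # rho value for every (point, angle) pair, shape (n_points, n_theta)
    rho = pts[:, 0, None] * np.cos(thetas) + pts[:, 1, None] * np.sin(thetas)
    rho_max = np.abs(rho).max() + 1e-9
    acc = np.zeros((n_theta, n_rho))
    for j in range(n_theta):
        acc[j], _ = np.histogram(rho[:, j], bins=n_rho, range=(-rho_max, rho_max))
    return acc

def hts_descriptor(points, n_theta=36, n_rho=16):
    """Fixed-length descriptor: per-angle mean and standard deviation of the
    normalized accumulator (an illustrative stand-in for the HTS statistics)."""
    acc = hough_accumulator(points, n_theta, n_rho)
    acc /= acc.sum()
    return np.concatenate([acc.mean(axis=1), acc.std(axis=1)])
```

Because the descriptor length depends only on the accumulator resolution, contours with different numbers of points become directly comparable, and the voting loop is linear in the number of contour points.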

Relevance: 90.00%
Publisher:
Abstract:

The comet assay is a method of DNA damage analysis widely used to quantify oxidative damage, DNA crosslinks, apoptosis and the genotoxicity of chemical substances such as pharmaceutical and agrochemical products, among others. The technique detects DNA strand breaks, alkali-labile sites and incomplete excision-repair sites, and is based on the migration of DNA fragments under microelectrophoresis: DNA migrates toward the anode forming a "tail", so the resulting image has the appearance of a comet. Slides can be stained with fluorescent dyes or silver; the two options differ in the type of microscopy used for the analysis and in whether the slides can be stored, and fluorescent staining is the more difficult method to carry out. Image analysis can be performed visually, but this has the disadvantage of subjectivity in the results, which can be minimized by an automated method of digital analysis. This report studied that process with the aim of validating the digital analysis, turning it into a quantitative method with greater reproducibility and minimizing the variability and imprecision due to subjective analysis. For this validation we selected 50 comets, photographed in a standardized way and printed; the pictures were then submitted to three experienced appraisers, who quantified them manually. Later, the images were processed with the free software ImageJ 1.38x, printed, and quantified manually by the same appraisers. The intraclass correlation was higher for comet measures after image processing. Next, an algorithm for automated digital analysis of the comet measures was developed; the values obtained were compared with the 12 estimated manually after processing, resulting in high correlation among the measures. The use of image analysis systems increases ...(Complete abstract: click electronic access below)
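Since the abstract only names the automated measures, here is a minimal sketch of how comet metrics might be computed from a background-subtracted 1-D intensity profile along the electrophoresis axis; the head/tail split rule, the `head_frac` threshold, and the metric names are illustrative assumptions, not the validated algorithm from the study:

```python
import numpy as np

def comet_measures(profile, head_frac=0.5):
    """Toy comet metrics from a background-subtracted 1-D intensity profile
    (head first, tail trailing). The split rule -- the head ends at the last
    column whose intensity is >= head_frac * peak -- is an illustrative
    assumption, not the validated protocol."""
    profile = np.asarray(profile, dtype=float)
    head = profile >= head_frac * profile.max()
    head_end = np.flatnonzero(head)[-1]              # last column counted as head
    tail = profile[head_end + 1:]
    tail_length = tail.size                          # tail extent, in columns
    tail_dna = 100.0 * tail.sum() / profile.sum()    # % of total intensity in tail
    tail_moment = tail_length * tail_dna             # simple tail-moment proxy
    return tail_length, tail_dna, tail_moment
```

Computing the metrics from pixel intensities rather than by eye is precisely what removes the appraiser subjectivity the abstract discusses.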

Relevance: 90.00%
Publisher:
Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%
Publisher:
Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 90.00%
Publisher:
Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 90.00%
Publisher:
Abstract:

Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as the final partitioning tool for graphs modeled by some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance-learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Adopting the Berkeley Segmentation Data Set and Benchmark as a baseline, our goal is to compare the results obtained with this method against previous work to validate its performance.
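The final NCut partitioning step applied to the region-adjacency graph can be sketched as a two-way spectral cut; the toy similarity matrix and the zero-threshold rule below are assumptions for illustration, not the Watershed-based modeling or distance-learning steps of the paper:

```python
import numpy as np

def two_way_ncut(W):
    """Two-way Normalized Cut (Shi & Malik): threshold the eigenvector of the
    second smallest eigenvalue of the symmetrically normalized Laplacian."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)                       # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(d)
    # L_sym = I - D^{-1/2} W D^{-1/2}
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :])
    vals, vecs = np.linalg.eigh(L_sym)      # eigenvalues in ascending order
    fiedler = d_inv_sqrt * vecs[:, 1]       # map back to the generalized problem
    return fiedler > 0                      # sign of entries gives the bipartition
```

In the combined pipeline, each graph node would be a Watershed region and each weight a learned similarity, so the cut separates groups of regions rather than individual pixels.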

Relevance: 90.00%
Publisher:
Abstract:

A CMOS/SOI circuit to decode PWM signals is presented as part of a body-implanted neurostimulator for a visual prosthesis. Since the encoded data stream is the sole input to the circuit, the decoding technique is based on a double-integration concept and does not require dc filtering. Nonoverlapping control phases are derived internally from the incoming pulses, and a fast-settling comparator ensures good discrimination accuracy in the megahertz range. The circuit was integrated on a 2 µm single-metal SOI fabrication process and has an effective area of 2 mm². Typically, the measured resolution of the encoding parameter α was better than 10% at 6 MHz and V_DD = 3.3 V. Stand-by consumption is around 340 µW. Pulses with frequencies up to 15 MHz and α = 10% can be discriminated for V_DD spanning from 2.3 V to 3.3 V. Such excellent immunity to V_DD deviations meets a design specification with respect to inherent coupling losses when transmitting data and power through a transcutaneous link.
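The duty-ratio recovery that the double-integration decoder performs in hardware can be illustrated numerically: integrating the waveform over one period and normalizing by the integral of a full-scale reference yields the encoding parameter without any dc filtering. This is a behavioral sketch under assumed ideal sampling, not the CMOS/SOI circuit itself:

```python
import numpy as np

def decode_pwm_duty(samples, dt):
    """Estimate the PWM duty ratio from one period of sampled data.
    The integral of the signal over the period, divided by the integral of a
    constant full-scale level over the same period, recovers the duty cycle;
    this mirrors the spirit of the double-integration decoder, not its
    actual switched-capacitor implementation."""
    samples = np.asarray(samples, dtype=float)
    v_high = samples.max()                      # assumed logic-high level
    period_integral = samples.sum() * dt        # integral of the incoming pulse train
    full_scale = v_high * samples.size * dt     # integral of the full-scale reference
    return period_integral / full_scale
```

Because the result is a ratio of two integrals over the same window, it is insensitive to the absolute supply level, which is the property that makes the hardware decoder tolerant of V_DD deviations.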

Relevance: 90.00%
Publisher:
Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%
Publisher:
Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)