908 results for Pre-processing


Relevance: 60.00%

Abstract:

In this paper, we present a novel analytical formulation for the coupled partial differential equations governing electrostatically actuated constrained elastic structures of inhomogeneous material composition. We also present a computationally efficient numerical framework for solving the coupled equations over a reference domain with a fixed finite-element mesh. This serves two purposes: (i) a series of problems with varying geometries and piece-wise homogeneous and/or inhomogeneous material distribution can be solved with a single pre-processing step, and (ii) topology optimization methods can be easily implemented by interpolating the material at each point in the reference domain from a void to a dielectric or a conductor. This is attained by considering the steady-state electrical current conduction equation with a `leaky capacitor' model instead of the usual electrostatic equation. The formulation is amenable to both static and transient problems in the elastic domain coupled with the quasi-electrostatic electric field. The procedure is numerically implemented on the COMSOL Multiphysics(R) platform using the weak variational form of the governing equations. Examples are presented to show the accuracy and versatility of the scheme. The accuracy of the scheme is validated for the special case of piece-wise homogeneous material in the limit of the leaky-capacitor model approaching the ideal case.
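
As a rough reading of the `leaky capacitor' idea (a sketch under assumed notation, not the paper's exact equations), every material point can carry both a conductivity and a permittivity, and the quasi-electrostatic potential then solves a charge-conservation equation that reduces to the ideal electrostatic problem as the leakage in the dielectric/void regions vanishes:

```latex
% Assumed form of the quasi-electrostatic ``leaky capacitor'' model (illustrative only):
% conduction plus displacement currents, with spatially varying sigma and epsilon.
\begin{equation}
  \nabla \cdot \mathbf{J} = 0, \qquad
  \mathbf{J} = \sigma(\mathbf{x})\,\mathbf{E}
             + \frac{\partial}{\partial t}\bigl(\varepsilon(\mathbf{x})\,\mathbf{E}\bigr), \qquad
  \mathbf{E} = -\nabla \phi .
\end{equation}
% In steady state this becomes \nabla\cdot(\sigma\nabla\phi) = 0; the usual electrostatic
% equation \nabla\cdot(\varepsilon\nabla\phi) = 0 is recovered in the ideal (non-leaky) limit.
```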

Relevance: 60.00%

Abstract:

Instruction scheduling with an automaton-based resource-conflict model is well established for normal scheduling. Such models have been generalized to software pipelining in the modulo-scheduling framework. One weakness of existing methods is that a distinct automaton must be constructed for each combination of a reservation table and an initiation interval. In this work, we present a different approach to modelling conflicts. We construct one automaton for each reservation table, which acts as a compact encoding of all the conflict automata for that table and from which they can be recovered for use in modulo scheduling. The basic premise of the construction is to move away from the Proebsting-Fraser model of conflict automaton to the Muller model of automaton modelling issue sequences; the latter turns out to be useful and efficient in this situation. Having constructed this automaton, we show how to improve the estimate of the resource-constrained initiation interval. Such a bound is always better than the average-use estimate, and we show that it is safe: it never exceeds the true initiation interval. This use of the automaton is orthogonal to its use in modulo scheduling. Once we generate the required information during pre-processing, we can compute the lower bound for a program without any further reference to the automaton.
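
For context, here is a minimal sketch of the standard average-use estimate that the automaton-based bound above improves upon; the function and data-structure names are assumptions for illustration, not taken from the paper.

```python
# Classical "average-use" lower bound on the resource-constrained initiation
# interval (ResMII): for each resource, divide its total use by the number of
# available units and take the maximum over resources.
from math import ceil
from collections import Counter

def average_use_res_mii(reservation_tables, resource_counts):
    """reservation_tables: one Counter of {resource: cycles used} per operation.
    resource_counts: {resource: number of identical units available}."""
    total_use = Counter()
    for table in reservation_tables:
        total_use.update(table)
    return max(ceil(total_use[r] / resource_counts[r]) for r in resource_counts)

# Example: 7 operations each occupying the single multiplier for 2 cycles -> ResMII = 14.
ops = [Counter({"mult": 2})] * 7
print(average_use_res_mii(ops, {"mult": 1}))  # 14
```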

Relevance: 60.00%

Abstract:

In this paper, we discuss the issues related to word recognition in born-digital word images. We introduce a novel method of power-law transformation on the word image for binarization. We show the improvement in image binarization and the consequent increase in the recognition performance of an OCR engine on the word image. The optimal value of gamma for a word image is automatically chosen by our algorithm with a fixed stroke-width threshold. We have experimented exhaustively with our algorithm by varying the gamma and stroke-width threshold values. By varying the gamma value, we found that our algorithm performed better than the results reported in the literature. On the ICDAR Robust Reading Systems Challenge-1: Word Recognition Task on the born-digital dataset, compared with the recognition rate of 61.5% achieved by TH-OCR after suitable pre-processing by Yang et al. and 63.4% by ABBYY Fine Reader (used as the baseline by the competition organizers without any pre-processing), we achieved 82.9% using Omnipage OCR applied to the images after being processed by our algorithm.
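
A minimal sketch of the core power-law (gamma) transform followed by binarization, assuming OpenCV and Otsu thresholding; the stroke-width-based selection of the optimal gamma described above is only indicated, not reproduced.

```python
# Apply a power-law (gamma) transform to a grayscale word image and binarize
# each candidate with Otsu's threshold. A stroke-width criterion (as in the
# abstract) would then pick the best gamma from the returned candidates.
import numpy as np
import cv2

def gamma_binarize(gray, gammas=(0.5, 1.0, 1.5, 2.0, 2.5)):
    results = {}
    norm = gray.astype(np.float64) / 255.0
    for g in gammas:
        transformed = np.uint8(255 * np.power(norm, g))   # power-law transform
        _, binary = cv2.threshold(transformed, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        results[g] = binary
    return results
```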

Relevance: 60.00%

Abstract:

This paper presents a GPU implementation of normalized cuts for the road extraction problem using panchromatic satellite imagery. The roads are extracted in three stages, namely pre-processing, image segmentation and post-processing. Initially, the image is pre-processed to improve the tolerance by reducing the clutter (which mostly represents buildings, vegetation and fallow regions). The road regions are then extracted using the normalized cuts algorithm. The normalized cuts algorithm is a graph-based partitioning approach whose focus lies in extracting the global impression (perceptual grouping) of an image rather than local features. For the segmented image, post-processing is carried out using morphological operations - erosion and dilation. Finally, the road-extracted image is overlaid on the original image. Here, a GPGPU (General Purpose Graphical Processing Unit) approach has been adopted to implement the same algorithm on the GPU for fast processing. A performance comparison of this proposed GPU implementation of the normalized cuts algorithm with the earlier CPU implementation is presented. From the results, we observe a computational improvement in terms of time, which grows with image size, for the proposed GPU implementation of normalized cuts. A qualitative and quantitative assessment of the segmentation results is also presented.
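
The post-processing stage can be illustrated with a short sketch; the kernel size and library choice are assumptions, not taken from the paper.

```python
# Morphological erosion followed by dilation (an opening) to remove small
# non-road fragments from the binary segmentation produced by normalized cuts.
import cv2
import numpy as np

def postprocess_roads(segmented_binary, kernel_size=5):
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    eroded = cv2.erode(segmented_binary, kernel, iterations=1)
    dilated = cv2.dilate(eroded, kernel, iterations=1)
    return dilated
```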

Relevance: 60.00%

Abstract:

We present a quantum dot based DNA nanosensor specifically targeting the cleavage step in the reaction cycle of the essential DNA-modifying enzyme, mycobacterial topoisomerase I. The design takes advantage of the unique photophysical properties of quantum dots to generate visible fluorescence recovery upon specific cleavage by mycobacterial topoisomerase I. This report, for the first time, demonstrates the possibility of quantifying the cleavage activity of the mycobacterial enzyme without pre-processing sample purification or post-processing signal amplification. The cleavage-induced signal response has also proven reliable in biological matrices, such as whole-cell extracts prepared from Escherichia coli and human Caco-2 cells. It is expected that the assay may contribute to the clinical diagnostics of bacterial diseases, as well as the evaluation of treatment outcomes.

Relevance: 60.00%

Abstract:

Imaging flow cytometry is an emerging technology that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy. It allows high-throughput imaging of cells with good spatial resolution while they are in flow. This paper proposes a general framework for the processing/classification of cells imaged using an imaging flow cytometer. Each cell is localized by finding an accurate cell contour. Then, features reflecting cell size, circularity and complexity are extracted for classification using an SVM. Unlike conventional iterative, semi-automatic segmentation algorithms such as active contours, we propose a non-iterative, fully automatic graph-based cell localization. In order to evaluate the performance of the proposed framework, we have successfully classified unstained, label-free leukaemia cell lines MOLT, K562 and HL60 from video streams captured using a custom-fabricated, cost-effective microfluidics-based imaging flow cytometer. The proposed system is a significant development in the direction of building a cost-effective cell analysis platform that would facilitate affordable mass screening camps looking at cellular morphology for disease diagnosis.

Lay description: In this article, we propose a novel framework for processing the raw data generated using microfluidics-based imaging flow cytometers. Microfluidics microscopy, or microfluidics-based imaging flow cytometry (mIFC), is a recent microscopy paradigm that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy, allowing us to image cells while they are in flow. In comparison to conventional slide-based imaging systems, mIFC is a nascent technology enabling high-throughput imaging of cells and is yet to take the form of a clinical diagnostic tool. The proposed framework processes the raw data generated by mIFC systems. The framework incorporates several steps: pre-processing of the raw video frames to enhance the contents of the cell, localising the cell by a novel, fully automatic, non-iterative graph-based algorithm, extraction of different quantitative morphological parameters, and subsequent classification of cells. In order to evaluate the performance of the proposed framework, we have successfully classified unstained, label-free leukaemia cell lines MOLT, K562 and HL60 from video streams captured using a cost-effective microfluidics-based imaging flow cytometer. The cell lines HL60, K562 and MOLT were obtained from the ATCC (American Type Culture Collection) and are cultured separately in the lab. Thus, each culture contains cells from its own category alone and thereby provides the ground truth. Each cell is localised by finding a closed cell contour, defined via a directed, weighted graph built from the Canny edge image of the cell such that the closed contour lies along the shortest weighted path surrounding the centroid of the cell, from a starting point on a good curve segment to an immediate endpoint. Once the cell is localised, morphological features reflecting the size, shape and complexity of the cells are extracted and used to develop a support vector machine based classification system. We could classify the cell lines with good accuracy and the results were quite consistent across different cross-validation experiments.
We hope that imaging flow cytometers equipped with the proposed framework for image processing will enable cost-effective, automated and reliable disease screening in overloaded facilities that cannot afford to hire skilled personnel in large numbers. Such platforms would potentially facilitate screening camps in low-income countries, thereby transforming current health care paradigms by enabling rapid, automated diagnosis of diseases like cancer.
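
A minimal sketch of the feature-extraction and SVM-classification steps described above, under assumed feature definitions; the graph-based contour localization itself is not reproduced here.

```python
# Given binary masks of localized cells, extract size/circularity/complexity-style
# features and train an SVM classifier (illustrative features, not the authors' exact ones).
import numpy as np
from skimage.measure import regionprops, label
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cell_features(mask):
    props = regionprops(label(mask))[0]          # assume one cell per mask
    area = props.area
    perimeter = props.perimeter
    circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
    complexity = perimeter ** 2 / (area + 1e-9)  # a simple shape-complexity proxy
    return [area, circularity, complexity]

def train_classifier(masks, labels):
    X = np.array([cell_features(m) for m in masks])
    clf = SVC(kernel="rbf", gamma="scale")
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)
```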

Relevance: 60.00%

Abstract:

The bilateral filter is known to be quite effective in denoising images corrupted with small dosages of additive Gaussian noise. The denoising performance of the filter, however, is known to degrade quickly with the increase in noise level. Several adaptations of the filter have been proposed in the literature to address this shortcoming, but often at a substantial computational overhead. In this paper, we report a simple pre-processing step that can substantially improve the denoising performance of the bilateral filter, at almost no additional cost. The modified filter is designed to be robust at large noise levels, and often tends to perform poorly below a certain noise threshold. To get the best of the original and the modified filters, we propose to combine them in a weighted fashion, where the weights are chosen to minimize (a surrogate of) the oracle mean-squared error (MSE). The optimally weighted filter is thus guaranteed to perform better than either of the component filters in terms of the MSE, at all noise levels. We also provide a fast algorithm for the weighted filtering. Visual and quantitative denoising results on standard test images are reported, which demonstrate that the improvement over the original filter is significant both visually and in terms of PSNR. Moreover, the denoising performance of the optimally weighted bilateral filter is competitive with that of the computation-intensive non-local means filter.
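
For intuition, if the clean image x were available, the oracle weight for a convex combination of the two filter outputs f_1 and f_2 would have a closed form; the paper works with a surrogate of this oracle MSE, so the expression below is illustrative only.

```latex
% Oracle weight for combining two denoised estimates f_1 and f_2 of a clean image x
% (illustrative; in practice x is unknown and a surrogate of the MSE is minimized,
% and the weight may be clipped to [0, 1]).
\begin{equation}
  w^\star \;=\; \arg\min_{w}\;\bigl\| w f_1 + (1-w) f_2 - x \bigr\|_2^2
  \;=\; \frac{\langle x - f_2,\; f_1 - f_2 \rangle}{\| f_1 - f_2 \|_2^2}.
\end{equation}
```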

Relevance: 60.00%

Abstract:

In this paper we introduce a weighted complex network model to investigate and recognize the structure of patterns. The usual treatment in pattern recognition models is to describe each pattern as a high-dimensional vector, which, however, is insufficient to express structural information. Thus, a number of methods have been developed to extract structural information, such as the different feature extraction algorithms used in pre-processing steps, or the local receptive fields in convolutional networks. In our model, each pattern is attributed to a weighted complex network whose topology represents the structure of that pattern. Based upon the training samples, we obtain several prototypal complex networks that stand for the general structural characteristics of patterns in different categories. We use these prototypal networks to recognize unknown patterns. This is an attempt to use complex networks in pattern recognition, and our results show the potential for real-world pattern recognition. A spatial parameter is introduced to obtain the optimal recognition accuracy, and it remains constant, insensitive to the number of training samples. We have discussed the interesting properties of the prototypal networks. An approximate linear relation is found between the strength and color of vertices, with which we can compare the structural differences between categories. We have visualized these prototypal networks to show that their topology indeed represents the common characteristics of patterns. We have also shown that the asymmetric strength distribution in these prototypal networks brings high robustness to recognition. Our study may also shed light on understanding the mechanism of biological neuronal systems in object recognition.
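
One plausible (and heavily simplified) way to attribute a pattern to a weighted network, purely for illustration; the article's exact construction is not specified here, so vertices, neighbourhood radius and weighting are assumptions.

```python
# Map a grayscale pattern to a weighted graph whose vertices are pixels and whose
# edge weights combine intensity similarity and spatial proximity in a local window.
import numpy as np
import networkx as nx

def pattern_to_network(image, radius=2, sigma_i=0.1, sigma_d=2.0):
    g = nx.Graph()
    h, w = image.shape
    img = image.astype(np.float64) / max(image.max(), 1)
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx_ = y + dy, x + dx
                    if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx_ < w:
                        w_int = np.exp(-((img[y, x] - img[ny, nx_]) ** 2) / sigma_i)
                        w_dst = np.exp(-(dy * dy + dx * dx) / (sigma_d ** 2))
                        g.add_edge((y, x), (ny, nx_), weight=w_int * w_dst)
    return g
```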

Relevance: 60.00%

Abstract:

Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western World. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing step to ensure data quality (missing-value imputation, probe quality control, data smoothing and intraclass-variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes. Two of them are directly involved in cancer progression and, in particular, in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values in the range of 0.955 to 0.997).
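
A sketch of a comparable pre-processing and supervised-classification pipeline using scikit-learn; the imputation strategy, variance threshold and classifier here are assumptions, not the study's exact choices.

```python
# Impute missing expression values, filter near-constant probes, then estimate the
# cross-validated AUC of a supervised classifier, mirroring the described workflow.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import VarianceThreshold
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def evaluate(expression_matrix, tumour_labels):
    """expression_matrix: samples x probes (may contain NaNs); tumour_labels: 0/1."""
    model = Pipeline([
        ("impute", SimpleImputer(strategy="median")),     # missing-value imputation
        ("filter", VarianceThreshold(threshold=0.01)),    # drop near-constant probes
        ("clf", SVC(kernel="linear", probability=True)),
    ])
    auc = cross_val_score(model, expression_matrix, tumour_labels,
                          cv=5, scoring="roc_auc")
    return auc.mean()
```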

Relevance: 60.00%

Abstract:

The standardization of the manufacture of stainless-steel endodontic instruments contributed to the development of new geometric features. Proposals appeared for changes in the design of the helical shaft, the cross-section, the tip, the taper and the diameter at the tip (D0). At the same time, the use of nickel-titanium alloys made it possible to produce motor-driven instruments, which are widely used today. Every year the industry launches instruments with various modifications without, however, providing sufficient information on the clinical implications of these modifications. There is growing interest in the study of the different geometric features and their precise metrology. Traditionally, the measurement of geometric features of endodontic instruments is performed visually by optical microscopy. However, this visual procedure is slow and subjective. This work proposes a new method for the metrology of endodontic instruments based on scanning electron microscopy (SEM) and digital image analysis. The depth of field of the SEM makes it possible to image the entire relief of the endodontic instrument at a constant working distance. In addition, the images obtained with the backscattered-electron detector contain fewer artifacts and shadows, making image acquisition and analysis easier. Furthermore, image analysis allows more efficient forms of measurement, with greater speed and quality. A specific sample holder was adapted for acquiring images of the endodontic instruments. It consists of a multiple electrical connector with 12 screw-type terminals of 4 mm diameter, on an aluminium base covered with gold discs. The connector sockets (female terminals) have an appropriate diameter (2.5 mm) for holding the endodontic instruments. Moreover, the ordered positioning of the instruments in the electrical connector allows automated image acquisition in the SEM. The gold targets produce, in the backscattered-electron images, better atomic-number contrast between the gold background and the instruments. In the sample holder developed, the discs that make up the gold background are in fact sputter-coater targets, commonly found in SEM laboratories. For each instrument, images of four to six adjacent fields at 100X magnification are acquired automatically to cover the entire length of the instrument at the required magnification and resolution (3.12 µm/pixel). The images are processed and analysed with the Axiovision and KS400 software packages. First they are assembled into a single extended field for each instrument by a semi-automatic alignment procedure based on interaction with Axiovision. Then the image of each instrument goes through an automated image-analysis routine in KS400. The routine follows a standard sequence: pre-processing, segmentation, post-processing and measurement of the geometric features.
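
A generic sketch of that standard sequence (pre-processing, segmentation, post-processing, measurement) using scikit-image; the thesis implements the routine in KS400, so every function choice and parameter below is illustrative only.

```python
# Median filtering, Otsu segmentation (instrument darker than the gold background
# in backscattered-electron images), small-object removal, and basic measurements.
from skimage import filters, morphology, measure

def measure_instrument(bse_image):
    smoothed = filters.median(bse_image)                      # pre-processing
    binary = smoothed < filters.threshold_otsu(smoothed)      # segmentation
    cleaned = morphology.remove_small_objects(binary, 500)    # post-processing
    regions = measure.regionprops(measure.label(cleaned))
    largest = max(regions, key=lambda r: r.area)
    return {"area_px": largest.area, "length_px": largest.major_axis_length}
```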

Relevance: 60.00%

Abstract:

The main objective of this study was to use trace-element concentrations and Pb isotope analysis (204Pb, 206Pb, 207Pb, 208Pb) as tools to characterize the pollution of Sepetiba Bay, RJ. Surface bottom-sediment samples were collected in three campaigns in November 2010 in the western sector of Sepetiba Bay, RJ. The sampling grid comprises 66 samples (BSEP 001 to BSEP 066) collected with a Van Veen grab sampler. Sample pre-processing took place at the Geological Sample Preparation Laboratory of the Department of Geology of the Universidade do Estado do Rio de Janeiro. Partial digestion of the sediment samples (< 0.072 mm) to obtain the partial concentrations of trace elements (Ag, As, Cd, Co, Cr, Cu, Li, Mn, Ni, Pb, Sr, U, Zn) and Pb isotopes (leaching) was carried out at the Analytical Geochemistry Laboratory of the Institute of Geosciences of UNICAMP, and the measurements were performed by ICP-MS. The analyses of total trace-element concentrations (including Hg) and Pb isotopes (total dissolution) were carried out at the ACTLABS laboratory (Ontario, Canada) using an ICP Varian Vista. Isotopic measurements were made only on the samples with partial Pb concentrations above 0.5 µg/g, totalling 21 stations. An enrichment of trace elements in the western sector of Sepetiba Bay could be observed. The mean total concentrations of Ag (0.4 µg/g), Cd (0.76 µg/g), Cu (62.59 µg/g), Li (43.29 µg/g), Ni (16.65 µg/g), Pb (20.08 µg/g), Sr (389.64 µg/g) and Zn (184.82 µg/g) exceeded the recommended limits or natural values. This may reflect anthropogenic influence in the region, mainly related to dredging activity and to the ore residues left on Ilha da Madeira by the deactivated Ingá mining company. The distribution maps of trace-metal concentrations highlighted the presence of several deposition sites along the western sector of Sepetiba Bay, notably the region between the central-western portion of Ilha de Itacuruça and the mainland; Saco da Marambaia and Ponta da Pombeba; and the western portion of Ponta da Marambaia. The 206Pb/207Pb isotopic ratios of the studied area ranged from 1.163 to 1.259 for total dissolution and from 1.1749 to 1.1877 for the leaching technique, values considered signatures of post-industrial sediments or comparable to the signature of gasoline. Regarding the leaching technique, the surface sediments of the western sector (206Pb/207Pb: 1.1789) of Sepetiba Bay showed a uniform signature, less radiogenic than that of the eastern sector (206Pb/207Pb: 1.2373 and 1.2110) of the bay. The Pb isotopic signature found in this region indicates the small contribution of oceanic waters to this system; however, the intense internal circulation of the bay's waters allows their homogenization. The use of these tools in the environmental monitoring of the area proved quite efficient, and it is important to continue this line of research in order to support the implementation of a local management plan.

Relevance: 60.00%

Abstract:

The discrimination of phases that are practically indistinguishable under the reflected-light optical microscope or the scanning electron microscope (SEM) is one of the classic problems of ore microscopy. To address this problem, the technique of co-localized microscopy has recently been employed, which consists of combining two microscopy modalities: optical microscopy and scanning electron microscopy. The goal of the technique is to provide a multimodal microscopy image, making it possible to identify, in mineral samples, phases that would not be distinguishable with a single modality, thus overcoming the individual limitations of the two systems. The registration method available in the literature for fusing optical and SEM images is a laborious procedure, extremely dependent on operator interaction, since it involves calibrating the system with a standard grid for every image acquisition routine. For this reason the existing technique is not practical. This work proposes a methodology to automate the registration of optical and SEM microscopy images, so as to improve and simplify the use of co-localized microscopy. The proposed method can be subdivided into two procedures: obtaining the transformation, and registering the images with this transformation. Obtaining the transformation involves, first, pre-processing the image pairs so as to perform a coarse registration between the images of each pair. Next, homologous points are obtained in the optical and SEM images. For this, two methods were used: the first developed on the basis of the SIFT algorithm and the second defined from a search for the maximum value of the correlation coefficient. In the next step the transformation is computed. Two distinct approaches were employed: the local weighted mean (LWM) and weighted least squares with orthogonal polynomials (MQPPO). The LWM takes as input so-called pseudo-homologous points, which are forced to be regularly distributed in the reference image and which reveal, in the image to be registered, the local relative displacements between the images. These pseudo-homologous points can be obtained either by SIFT or by the correlation-coefficient method. The MQPPO, on the other hand, receives a set of points with their natural distribution. The analysis of the resulting registrations used the correlation between the registered images as a metric. It was observed that the proposed SIFT-LWM and SIFT-Correlation variants yielded results slightly superior to those of the method based on the standard grid and LWM. Thus, the proposal, besides drastically reducing operator intervention, also provided more accurate results. On the other hand, the method based on the transformation given by weighted least squares with orthogonal polynomials showed results inferior to those produced by the method that uses the standard grid.
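
A sketch of the SIFT-based homologous-point step using OpenCV; a RANSAC affine transform stands in here for the LWM and weighted-least-squares mappings actually studied, so this is an illustration rather than the dissertation's method.

```python
# Detect SIFT keypoints in the optical and SEM images, match them, and warp the
# optical image onto the SEM image with a robustly estimated affine transform.
import cv2
import numpy as np

def register_optical_to_sem(optical_gray, sem_gray):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(optical_gray, None)
    k2, d2 = sift.detectAndCompute(sem_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = sem_gray.shape
    return cv2.warpAffine(optical_gray, affine, (w, h))
```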

Relevance: 60.00%

Abstract:

In this article, we detail the methodology developed to construct arbitrarily high-order schemes - linear and WENO - on 3D mixed-element unstructured meshes made up of general convex polyhedral elements. The approach is tailored specifically to the solution of scalar level set equations for application to incompressible two-phase flow problems. The construction of WENO schemes on 3D unstructured meshes is notoriously difficult, as it involves a much higher level of complexity than 2D approaches. This is due to the multiplicity of geometrical considerations introduced by the extra dimension, especially on mixed-element meshes. Therefore, we have specifically developed a number of algorithms to handle mixed-element meshes composed of convex polyhedra with convex polygonal faces. The contribution of this work concerns several areas of interest: the formulation of an improved methodology in 3D, the minimisation of computational runtime in the implementation through the maximum use of pre-processing operations, the generation of novel methods to handle complex 3D mixed-element meshes and, finally, the application of the method to the transport of a scalar level set. © 2012 Global-Science Press.
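
The target equation is the standard scalar level set transport equation for incompressible two-phase flow (notation assumed here):

```latex
% Scalar level set advection by a divergence-free velocity field.
\begin{equation}
  \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0, \qquad
  \nabla\cdot\mathbf{u} = 0,
\end{equation}
% where \phi is the level set function whose zero contour marks the fluid interface
% and \mathbf{u} is the velocity field.
```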

Relevance: 60.00%

Abstract:

On-site tracking in open construction sites is often difficult because of the large number of items that are present and need to be tracked. Additionally, the number of occlusions/obstructions present creates a highly complex tracking environment. Existing tracking methods are based mainly on radio-frequency technologies, including Global Positioning Systems (GPS), Radio Frequency Identification (RFID), Bluetooth, Wireless Fidelity (Wi-Fi), Ultra-Wideband, etc. These methods require considerable amounts of pre-processing time since they need tags to be manually deployed and a record kept of the items they are placed on. In construction sites with numerous entities, tag installation, maintenance and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. This paper presents a novel method for open-site tracking with construction cameras based on machine vision. According to this method, video feed is collected from on-site video cameras, and the user selects the entity he wishes to track. The entity is tracked in each video using 2D vision tracking. Epipolar geometry is then used to calculate the depth of the marked area and provide the 3D location of the entity. This method addresses the limitations of radio-frequency methods by being unobtrusive and using inexpensive, easy-to-deploy equipment. The method has been implemented in a C++ prototype, and preliminary results indicate its effectiveness.
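
A minimal sketch of the epipolar-geometry depth step, assuming two calibrated cameras with known projection matrices; the paper's C++ prototype is not reproduced, and this Python/OpenCV fragment only illustrates the triangulation idea.

```python
# Recover the 3D location of a tracked entity from its 2D positions in two site
# cameras by triangulation.
import cv2
import numpy as np

def locate_entity_3d(P1, P2, pt_cam1, pt_cam2):
    """P1, P2: 3x4 camera projection matrices; pt_cam1/2: (x, y) pixel positions."""
    a = np.array(pt_cam1, dtype=np.float64).reshape(2, 1)
    b = np.array(pt_cam2, dtype=np.float64).reshape(2, 1)
    homog = cv2.triangulatePoints(P1, P2, a, b)    # 4x1 homogeneous point
    return (homog[:3] / homog[3]).ravel()          # (X, Y, Z) in world coordinates
```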