955 results for Image-based cytometry


Relevance:

30.00%

Publisher:

Abstract:

The Field Programmable Gate Array (FPGA) implementation of the widely used Histogram of Oriented Gradients (HOG) algorithm is explored. The HOG algorithm is employed to extract features for object detection. A key focus has been to explore the use of a new FPGA-based processor targeted at image processing. The paper details the mapping and scheduling factors that influence performance, and the stages undertaken to deploy the algorithm on FPGA hardware while taking the specific IPPro architecture features into account. We show that multi-core IPPro performance can exceed that of state-of-the-art FPGA designs by up to 3.2 times, with reduced design and implementation effort and increased flexibility, all on a low-cost Zynq programmable system.
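The HOG feature extraction the abstract refers to can be illustrated with a minimal, pure-Python sketch of one cell's orientation histogram. The function name and the 9-bin unsigned-gradient layout are illustrative defaults, not the paper's IPPro implementation:

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned-gradient orientation histogram for one cell (2D list of floats).

    Illustrative sketch only: border pixels are skipped so central
    differences stay inside the cell.
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]  # central difference, x
            gy = cell[y + 1][x] - cell[y - 1][x]  # central difference, y
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist
```

In a full HOG pipeline, cell histograms are then grouped into overlapping blocks and contrast-normalized before being fed to a classifier.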

Relevance:

30.00%

Publisher:

Abstract:

Digital pathology and the adoption of image analysis have grown rapidly in the last few years. This is largely due to the implementation of whole slide scanning, advances in software and computer processing capacity, and the increasing importance of tissue-based research for biomarker discovery and stratified medicine. This review sets out the key application areas for digital pathology and image analysis, with a particular focus on research and biomarker discovery. A variety of image analysis applications are reviewed, including nuclear morphometry and tissue architecture analysis, with emphasis on immunohistochemistry and fluorescence analysis of tissue biomarkers. Digital pathology and image analysis have important roles across the drug/companion diagnostic development pipeline, including biobanking, molecular pathology, tissue microarray analysis and molecular profiling of tissue, and these developments are reviewed. Underpinning all of them is the need for high-quality tissue samples, and the impact of pre-analytical variables on tissue research is discussed. This is combined with practical advice on setting up and running a digital pathology laboratory. Finally, we discuss the need to integrate digital image analysis data with epidemiological, clinical and genomic data in order to fully understand the relationship between genotype and phenotype and to drive discovery and the delivery of personalized medicine.

Relevance:

30.00%

Publisher:

Abstract:

A novel methodology has been developed to quantify important saltwater intrusion parameters in a sandbox-style experiment using image analysis. Existing methods found in the literature are based mainly on visual observations, which are subjective, labour intensive and limit the temporal and spatial resolutions that can be analysed. A robust error analysis was undertaken to determine the optimum methodology for converting image light intensity to concentration. Results showed that defining a relationship on a pixel-wise basis provided the most accurate image-to-concentration conversion and allowed quantification of the width of the mixing zone between the saltwater and freshwater. A large image sample rate was used to investigate the transient dynamics of saltwater intrusion, which rendered analysis by visual observation unsuitable. This paper presents the methodologies developed to minimise human input and promote autonomy, provide high-resolution image-to-concentration conversion and allow the quantification of intrusion parameters under transient conditions.
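The pixel-wise intensity-to-concentration relationship described above can be sketched as a per-pixel linear least-squares fit over a set of calibration images. The function names and the linear model are assumptions for illustration, not the authors' exact calibration procedure:

```python
def pixelwise_calibration(images, concs):
    """Fit c = a*I + b independently at every pixel.

    images: one 2D intensity array (list of lists) per known concentration;
    concs: the corresponding concentrations. Assumes intensity actually
    varies across the calibration set at each pixel.
    """
    h, w, n = len(images[0]), len(images[0][0]), len(concs)
    coeffs = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xs = [img[y][x] for img in images]
            mx, mc = sum(xs) / n, sum(concs) / n
            sxx = sum((v - mx) ** 2 for v in xs)
            sxc = sum((v - mx) * (c - mc) for v, c in zip(xs, concs))
            a = sxc / sxx
            coeffs[y][x] = (a, mc - a * mx)   # per-pixel slope and intercept
    return coeffs

def to_concentration(image, coeffs):
    """Convert a new intensity image to a concentration map."""
    return [[coeffs[y][x][0] * image[y][x] + coeffs[y][x][1]
             for x in range(len(image[0]))] for y in range(len(image))]
```

The per-pixel fit is what lets the conversion absorb spatial non-uniformities in lighting and sand packing that a single global calibration curve would smear out.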

Relevance:

30.00%

Publisher:

Abstract:

We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in the four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference-detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe, based on our verification sets.
We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
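The stochastic light-curve model mentioned above, the Ornstein-Uhlenbeck process, can be illustrated with a simple Euler-Maruyama simulation. The parameter names and the discretization are illustrative; the paper fits the process to observed difference-flux series rather than simulating it:

```python
import math
import random

def simulate_ou(mu, tau, sigma, dt, n, x0, seed=0):
    """Euler-Maruyama sketch of an Ornstein-Uhlenbeck series.

    mu: long-run mean flux, tau: relaxation timescale,
    sigma: volatility, dt: time step, n: number of samples.
    """
    rng = random.Random(seed)
    x = [x0]
    for _ in range(n - 1):
        drift = (mu - x[-1]) * dt / tau          # mean reversion toward mu
        noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x.append(x[-1] + drift + noise)
    return x
```

The mean-reverting drift is what distinguishes AGN-like stochastic variability from the rise-and-decay shapes of the deterministic burst models.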

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the applications of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimizing manual inputs and the systematic errors they can introduce. This allowed the quantification of the width of the mixing zone, which is difficult to measure with experimental methods based on visual observations. Glass beads of different grain sizes were tested under both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady-state condition sooner while receding than while advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits: a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, which indicates faster fluid velocities and higher dispersion. The angle-of-intrusion analysis revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, different physical repeats of the experiment produced an average coefficient of variation of less than 0.18 for the measured toe length and width of the mixing zone.

Relevance:

30.00%

Publisher:

Abstract:

Given the success of patch-based approaches to image denoising, this paper addresses the ill-posed problem of patch size selection. Large patch sizes improve noise robustness in the presence of good matches, but can also lead to artefacts in textured regions due to the rare patch effect; smaller patch sizes reconstruct details more accurately but risk over-fitting to the noise in uniform regions. We propose to jointly optimize each matching patch's identity and size for grayscale image denoising, and present several implementations. The new approach effectively selects the largest matching areas, subject to the constraints of the available data and noise level, to improve noise robustness. Experiments on standard test images demonstrate our approach's ability to improve on fixed-size reconstruction, particularly at high noise levels, on smoother image regions.
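The idea of keeping the largest patch size consistent with the noise level can be sketched as follows. The acceptance threshold of roughly 2σ² per pixel (the expected squared difference between two noisy copies of the same clean patch) and the slack factor are illustrative assumptions, not the paper's actual criterion:

```python
def best_patch_size(ref, cand, sizes, sigma, slack=1.5):
    """Pick the largest candidate size whose match error is noise-consistent.

    ref, cand: 2D lists (top-left aligned patches), sizes: candidate side
    lengths, sigma: noise standard deviation. Falls back to the smallest
    size when no size passes the test.
    """
    chosen = min(sizes)
    for s in sorted(sizes):
        n = s * s
        ssd = sum((ref[i][j] - cand[i][j]) ** 2
                  for i in range(s) for j in range(s))
        # Two independent noisy copies of the same patch differ by ~2*sigma^2
        # per pixel on average; allow some slack above that.
        if ssd / n <= slack * 2.0 * sigma ** 2:
            chosen = s
    return chosen
```

The same trade-off the abstract describes is visible here: a good match keeps passing the test at larger sizes (more averaging, better noise robustness), while a textured mismatch fails early and falls back to a small patch.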

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a novel and effective lip-based biometric identification approach with the Discrete Hidden Markov Model Kernel (DHMMK) is developed. Lips are described by shape features (both geometrical and sequential) on two different grid layouts: rectangular and polar. These features are then specifically modeled by a DHMMK and learnt by a support vector machine classifier. Our experiments are carried out in a ten-fold cross-validation fashion on three different datasets: the GPDS-ULPGC Face Dataset, the PIE Face Dataset and the RaFD Face Dataset. Results show that our approach has achieved average classification accuracies of 99.8%, 97.13%, and 98.10% on these three datasets, respectively, using only two training images per class. Our comparative studies further show that the DHMMK achieved a 53% improvement over the baseline HMM approach. The comparative ROC curves also confirm the efficacy of the proposed lip-contour-based biometrics learned by the DHMMK. We also show that the performance of linear and RBF SVMs is comparable under the DHMMK framework.

Relevance:

30.00%

Publisher:

Abstract:

This paper investigated using lip movements as a behavioural biometric for person authentication. The system was trained, evaluated and tested using the XM2VTS dataset, following the Lausanne Protocol configuration II. Features were selected from the DCT coefficients of the greyscale lip image. This paper investigated the number of DCT coefficients selected, the selection process, and static and dynamic feature combinations. Using a Gaussian Mixture Model - Universal Background Model framework, an Equal Error Rate of 2.20% was achieved during evaluation, and on an unseen test set a False Acceptance Rate of 1.7% and a False Rejection Rate of 3.0% were achieved. This compares favourably with face authentication results on the same dataset whilst not being susceptible to spoofing attacks.
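The DCT coefficients mentioned above come from a 2-D DCT of the lip image. A naive orthonormal DCT-II, written out directly from its definition, looks like this; it is a sketch for clarity only, as real systems use fast separable transforms:

```python
import math

def dct2(block):
    """Naive orthonormal 2D DCT-II of a square block (list of lists)."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    s += (block[y][x]
                          * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * x + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s   # energy concentrates at low u, v
    return out
```

Because most of the image energy lands in the low-frequency coefficients, selecting a small subset of them (the selection process the paper studies) yields a compact feature vector for the GMM-UBM classifier.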

Relevance:

30.00%

Publisher:

Abstract:

Studies have been carried out to recognize individuals from a frontal view using their gait patterns. In previous work, gait sequences were captured using either single or stereo RGB camera systems or the Kinect 1.0 camera system. In this research, we present a new frontal-view gait recognition method using a laser-based Time of Flight (ToF) camera. In addition to the new gait dataset, other contributions include enhancement of the silhouette segmentation, gait cycle estimation and gait image representations. We propose four new gait image representations, namely the Gait Depth Energy Image (GDE), Partial GDE (PGDE), Discrete Cosine Transform GDE (DGDE) and Partial DGDE (PDGDE). The experimental results show that all the proposed gait image representations produce better accuracy than the previous methods. In addition, we have also developed Fusion GDEs (FGDEs), which achieve better overall accuracy and outperform the previous methods.
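Energy-image representations of this kind are typically built by averaging aligned silhouette frames over one gait cycle; for a depth camera the silhouettes carry depth values rather than binary masks. A minimal sketch of that averaging step (the function name is illustrative, and the exact GDE construction may differ):

```python
def gait_depth_energy_image(frames):
    """Per-pixel average of aligned depth-silhouette frames over a gait cycle.

    frames: list of 2D depth maps (lists of lists), already size-normalized
    and centred, as in gait energy image pipelines.
    """
    h, w, n = len(frames[0]), len(frames[0][0]), len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```

Averaging compresses a whole cycle into one image: stable body regions stay sharp while swinging limbs blur into intensity gradients that encode the gait dynamics.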

Relevance:

30.00%

Publisher:

Abstract:

We present a new wrapper feature selection algorithm for human detection. This algorithm is a hybrid feature selection approach combining the benefits of filter and wrapper methods. It allows the selection of an optimal feature vector that well represents the shapes of the subjects in the images. In detail, the proposed feature selection algorithm adopts a k-fold subsampling and sequential backward elimination approach, while a standard linear support vector machine (SVM) is used as the classifier for human detection. We apply the proposed algorithm to the publicly accessible INRIA and ETH pedestrian full-image datasets with the PASCAL VOC evaluation criteria. Compared to other state-of-the-art algorithms, our feature selection based approach can improve the detection speed of the SVM classifier by over 50% with up to 2% better detection accuracy. Our algorithm also outperforms the equivalent systems introduced in the deformable part model approach, with around 9% improvement in detection accuracy.
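The sequential backward elimination step can be sketched as a greedy loop that repeatedly drops the feature whose removal helps (or at least does not hurt) a validation score. The scoring callback here stands in for the paper's k-fold SVM evaluation:

```python
def backward_eliminate(features, score):
    """Greedy sequential backward elimination.

    features: list of feature identifiers; score: callable returning a
    validation score for a feature subset (higher is better).
    """
    current = list(features)
    best = score(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for f in list(current):
            trial = [g for g in current if g != f]
            s = score(trial)
            if s >= best:               # removal does not hurt: accept it
                best, current, improved = s, trial, True
                break
    return current, best
```

Each accepted removal shrinks the set by one feature, so the loop always terminates; the filter stage of a hybrid approach would pre-rank features to cut down the number of wrapper evaluations.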

Relevance:

30.00%

Publisher:

Abstract:

A rich-model-based motion vector steganalysis benefiting from both temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method achieves substantially higher detection accuracy than previous methods, even the targeted ones. The improvement in detection accuracy lies in several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over longer distances. Therefore, temporal motion vector dependency, alongside the spatial dependency, is utilized for rigorous motion vector steganalysis. Secondly, unlike the filters previously used, which were heuristically designed against a specific motion vector steganography, a diverse set of filters that can capture aberrations introduced by various motion vector steganography methods is used. The variety and number of the filter kernels are substantially greater than in previous work. In addition, filters up to fifth order are employed, whereas previous methods use at most second-order filters. As a result, the proposed system captures various decorrelations in a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best of the authors' knowledge, the experiments section contains the most comprehensive tests in the motion vector steganalysis field, including five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% increase in detection accuracy at low payloads and 5% at higher payloads.
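The higher-order filters mentioned above can be thought of as finite-difference residual extractors applied along a motion-vector component; an n-th order difference is the simplest example of such a kernel (a rich model uses many more kernel shapes, in both spatial and temporal directions):

```python
def residuals(seq, order):
    """n-th order finite-difference residuals of a 1-D motion-vector component.

    A smooth (low-degree polynomial) motion trend is annihilated by a
    sufficiently high-order difference, so what remains highlights the
    small perturbations steganographic embedding introduces.
    """
    r = list(seq)
    for _ in range(order):
        r = [b - a for a, b in zip(r, r[1:])]
    return r
```

Feature vectors for the classifier are then typically built from co-occurrence statistics of such residuals rather than from the residuals themselves.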

Relevance:

30.00%

Publisher:

Abstract:

Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.

Relevance:

30.00%

Publisher:

Abstract:

This work situates its investigation of territorial marketing, and territorial branding in particular, within a holistic perspective encompassing behaviour, identity and territorial development. In this context, it focuses on the breadth and heterogeneity of actors capable of influencing the construction and transmission of the territorial brand, and on the need to account for them in branding assumptions so that territorial brands can be effectively sustained. The thesis advocates building territorial brands on the collaboration and integration of stakeholders in the construction process, so as to strengthen the direct relationship between branding, territorial identity and behaviour and to increase the brand's output. This orientation is embodied in the conceptual construct of Stakeholders Based Branding, and contributions to its development and modelling are explored and assessed. Empirically, based on a descriptive and exploratory approach, the investigation follows qualitative and interpretative work which, using Grounded Theory methodology, examines 6 case studies of Portuguese municipalities through 48 in-depth interviews with political leaders and territorial stakeholders, together with secondary data. The field results demonstrate the relationship between stakeholder integration and the sense of territorial branding and image, confirming that the more involved stakeholders feel in the construction of the territorial brand, the more they tend to take ownership of it, and that territories with more collaborative approaches to branding tend to have more positive self-images and public images. 
In parallel, the results identify a set of driving factors, implemented and/or envisaged, considered relevant for promoting a Stakeholders Based Branding orientation in the respective territories. From this investigative path emerges an inductive Stakeholders Based Branding construct, respecting the assumptions of Grounded Theory and based on the modelling and formulation of theoretical propositions intended to guide the construction of territorial brands grounded in stakeholder integration and collaboration.

Relevance:

30.00%

Publisher:

Abstract:

This work focused on the study of subspace techniques for the following applications: noise removal in time series and feature extraction for supervised classification problems. Both the linear and nonlinear variants of these techniques were studied, taking the SSA and KPCA algorithms as a starting point. Proposals for optimizing the algorithms are presented, together with a description of them from a perspective different from that found in the literature. In both the linear and nonlinear variants, the methods are presented using a consistent algebraic formulation. The subspace model is obtained by computing the eigenvalue/eigenvector decomposition of kernel or correlation/covariance matrices calculated from a multidimensional dataset. The complexity of nonlinear subspace techniques is discussed, namely the pre-image problem and the eigendecomposition of high-dimensional matrices. Different pre-image algorithms are presented, as well as alternative proposals for their optimization. An eigendecomposition of the kernel matrix based on low-rank approximations leads to a more efficient algorithm, the Greedy KPCA. The algorithms are applied to artificial signals in order to study the influence of the various parameters on their performance. In addition, the exploration of these techniques is extended to artefact removal in univariate biomedical time series, namely EEG signals.
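The first step of SSA is embedding the time series into a trajectory (Hankel) matrix, whose eigendecomposition then yields the subspace model used for denoising. A minimal sketch of the embedding (the function name is illustrative):

```python
def trajectory_matrix(series, window):
    """Hankel embedding used by SSA.

    Rows are the window lags; each column is one lagged window of the
    series, so anti-diagonals hold repeated samples.
    """
    k = len(series) - window + 1
    return [[series[i + j] for j in range(k)] for i in range(window)]
```

SSA then eigendecomposes the covariance of this matrix, keeps the leading components, and averages the reconstructed matrix along its anti-diagonals to recover a denoised series; KPCA replaces the covariance matrix with a kernel matrix, which is where the pre-image problem discussed in the abstract arises.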

Relevance:

30.00%

Publisher:

Abstract:

This work describes the development and application of systems based on gaseous microstructured detectors for energy-dispersive X-ray fluorescence (EDXRF) imaging. X-ray fluorescence imaging is a powerful, non-destructive technique for analysing the spatial distribution of elements in materials. The EDXRF imaging systems developed consist of an X-ray tube, used to excite the elements of the sample; a gaseous microstructured detector; and a pinhole lens that focuses the fluorescence radiation onto the detector plane, thus forming the image and allowing its magnification. The influence of the pinhole aperture diameter, and of the image magnification factor, on the position resolution of the system is also studied. Two different concepts of gaseous microstructured detectors were used. The first is based on the microstructure known as the 2D-Micro-Hole & Strip Plate (2D-MHSP), with an active area of 3 × 3 cm²; the second, based on the 2D-Thick-COBRA (2D-THCOBRA) structure, has an active detection area of 10 × 10 cm². These low-cost X-ray detectors operate in single-photon counting mode, determining the energy and interaction position of each photon that reaches the detector. They can thus detect the energy of the fluorescence X-ray photons and produce 2D images of the distribution of those photons for the desired energy range, making them well suited to EDXRF imaging applications. The detectors developed showed energy resolutions of 17% and 22% for incident photons with an energy of 5.9 keV, for the 2D-MHSP and 2D-THCOBRA detectors respectively, and position resolutions adequate for a wide range of applications. 
Throughout this work, the development, characterization and performance of each detector are detailed, along with their influence on the final performance of each proposed system. At a later stage, results are presented for the application of the two systems to various samples, including items of cultural heritage and a biological sample.
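The pinhole geometry described above couples the magnification and the achievable position resolution. A small sketch of the standard geometric relations, assuming purely geometric optics (magnification M = q/p, point-source spot size d(1+M) on the detector); the function name and parameterization are illustrative:

```python
def pinhole_system(p, q, d):
    """Geometric magnification and object-plane resolution of a pinhole imager.

    p: sample-to-pinhole distance, q: pinhole-to-detector distance,
    d: pinhole aperture diameter (same length units throughout).
    """
    m = q / p                 # magnification of the projected image
    spot = d * (1.0 + m)      # point-source spot diameter on the detector
    return m, spot / m        # (magnification, resolution at the sample)
```

The trade-off the abstract studies is visible directly: a smaller aperture d sharpens the image at the cost of photon flux, while increasing the magnification q/p relaxes the demand on the detector's intrinsic position resolution.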