901 results for Subfractals, Subfractal Coding, Model Analysis, Digital Imaging, Pattern Recognition


Relevance: 100.00%

Abstract:

Given the widespread use of computers, visual pattern recognition has been automated to cope with the huge number of available digital images. Many applications use image processing together with feature extraction and visual pattern recognition algorithms to identify people, support disease diagnosis, classify objects, and more, based on digital images. Among the features that can be extracted and analyzed from images is the shape of objects or regions. In some cases, shape is the only feature that can be extracted from the image with relatively high accuracy. In this work we present some of the most important shape analysis methods and compare their performance on three well-known shape image databases. Finally, we propose the development of a new shape descriptor based on the Hough Transform.
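The abstract only proposes a Hough-based shape descriptor without detailing it. As a hedged illustration of the general idea (all function names here are invented, not the authors' method), the NumPy sketch below bins a binary shape's pixels into a line-Hough accumulator and normalizes it into a histogram descriptor:

```python
import numpy as np

def hough_shape_descriptor(mask, n_theta=36, n_rho=32):
    """Illustrative shape descriptor: a normalized histogram over the
    (theta, rho) line-Hough accumulator of a binary shape's pixels."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(n_theta * n_rho)
    # center the coordinates so rho is bounded by the half-diagonal
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho_max = np.sqrt((np.abs(xs).max() + 1) ** 2 + (np.abs(ys).max() + 1) ** 2)
    acc = np.zeros((n_theta, n_rho))
    for i, t in enumerate(thetas):
        rho = xs * np.cos(t) + ys * np.sin(t)   # signed distance per pixel
        bins = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        np.add.at(acc[i], bins, 1)
    acc = acc.ravel()
    return acc / acc.sum()                      # scale-normalized histogram
```

Normalizing by the total count makes the descriptor comparable across shapes of different pixel counts, which is one plausible reason to build a descriptor on the Hough accumulator rather than on raw contours.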

Relevance: 100.00%

Abstract:

This study was designed to demonstrate the feasibility of in vivo image-guided percutaneous cryoablation of the porcine vertebral body. Methods: The institutional animal care committee approved this study. Cone-beam computed tomography (CBCT)-guided vertebral cryoablations (n = 22) were performed in eight pigs with short, 2-min, single or double-freezing protocols. Protective measures for the nerves included carbon dioxide (CO2) epidural injections and spinal canal temperature monitoring. Clinical, radiological, and pathological data with light (n = 20) or transmission electron (n = 2) microscopic analyses were evaluated after 6 days of clinical follow-up and euthanasia. Results: CBCT/fluoroscopic-guided transpedicular vertebral body cryoprobe positioning and CO2 epidural injection were successful in all procedures. No major complications were observed in seven animals (87.5 %, n = 8). A minor complication was observed in one pig (12.5 %, n = 1). Logistic regression model analysis showed the cryoprobe-spinal canal (Cp-Sc) distance to be the most efficient parameter for categorizing spinal canal temperatures lower than 19 °C (p < 0.004), with a significant Pearson's correlation (p < 0.041) between the Cp-Sc distance and the lowest spinal canal temperatures. Ablation zones encompassed the pedicles and the posterior wall of the vertebral bodies with an inflammatory rim, although no inflammatory infiltrate was depicted in the surrounding neural structures at light microscopy. Ultrastructural analyses evidenced myelin sheath disruption in some large nerve fibers, although neurological deficits were not observed. Conclusions: CBCT-guided vertebral cryoablation of the porcine spine is feasible under a combination of a short freezing protocol and protective measures for the surrounding nerves. Ultrastructural analyses may be helpful to assess early modifications of the nerve fibers.

Relevance: 100.00%

Abstract:

Color texture classification is an important step in image segmentation and recognition. Color information is especially important in textures of natural scenes, such as leaf surfaces, terrain models, etc. In this paper, we propose a novel approach based on the fractal dimension for color texture analysis. The proposed approach investigates the complexity in the R, G and B color channels to characterize a texture sample. We also propose to study all channels in combination, taking into consideration the correlations between them. Both approaches use the volumetric version of the Bouligand-Minkowski fractal dimension method. The results show an advantage of the proposed method over other color texture analysis methods. (C) 2011 Elsevier Ltd. All rights reserved.
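The paper estimates complexity per color channel via the volumetric Bouligand-Minkowski method. As a simpler, hedged stand-in for the same per-channel complexity idea (not the authors' algorithm), the sketch below computes a box-counting fractal dimension for each binarized RGB channel:

```python
import numpy as np

def box_counting_dimension(channel, threshold=128):
    """Box-counting fractal dimension of one binarized channel.
    A stand-in for the paper's volumetric Bouligand-Minkowski method."""
    img = (np.asarray(channel) >= threshold)
    sizes, counts = [], []
    s = min(img.shape) // 2
    while s >= 2:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes touching the set
        if occupied > 0:
            sizes.append(s)
            counts.append(occupied)
        s //= 2
    # N(s) ~ s^(-D): the dimension is minus the log-log slope
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

def rgb_fractal_signature(rgb):
    """Three-element complexity signature, one dimension per channel."""
    return np.array([box_counting_dimension(rgb[..., c]) for c in range(3)])
```

A fully occupied channel yields a dimension of 2, the expected value for a filled planar set; textured channels fall between 1 and 2, which is what makes the per-channel values usable as features.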

Relevance: 100.00%

Abstract:

Methods from statistical physics, such as those involving complex networks, have been increasingly used in the quantitative analysis of linguistic phenomena. In this paper, we represented pieces of text with different levels of simplification as co-occurrence networks and found that topological regularity correlated negatively with textual complexity. Furthermore, in less complex texts the distance between concepts, represented as nodes, tended to decrease. The complex network metrics were fed into multivariate pattern recognition techniques, which allowed us to distinguish between original texts and their simplified versions. For each original text, two simplified versions were generated manually with an increasing number of simplification operations. As expected, the distinction was easier for the strongly simplified versions, where the most relevant metrics were node strength, shortest paths and diversity. Also, the discrimination of complex texts improved with higher hierarchical network metrics, pointing to the usefulness of considering wider contexts around the concepts. Though the accuracy of the distinction was not as high as in methods using deep linguistic knowledge, the complex network approach is still useful for rapid screening of texts whenever assessing complexity is essential to guarantee accessibility for readers with limited reading ability. Copyright (c) EPLA, 2012
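Two of the metrics named above, node strength and shortest paths, can be sketched on a minimal word co-occurrence network. This is an illustrative pure-Python construction (window size and tokenization are assumptions, not the paper's exact setup):

```python
from collections import defaultdict, deque

def cooccurrence_network(tokens, window=1):
    """Word co-occurrence network: words within `window` positions of each
    other are linked; repeated co-occurrences increase the edge weight."""
    g = defaultdict(lambda: defaultdict(int))
    for i, a in enumerate(tokens):
        for b in tokens[i + 1:i + 1 + window]:
            if a != b:
                g[a][b] += 1
                g[b][a] += 1
    return g

def node_strength(g, v):
    """Sum of the weights of edges incident to node v."""
    return sum(g[v].values())

def avg_shortest_path(g):
    """Mean BFS distance over all connected node pairs (unweighted)."""
    total, pairs = 0, 0
    for s in list(g):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs if pairs else 0.0
```

In this representation, the paper's observation that simplified texts bring concepts closer corresponds to a drop in `avg_shortest_path` after simplification.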

Relevance: 100.00%

Abstract:

Intracellular pattern recognition receptors such as members of the nucleotide-binding oligomerization domain (NOD)-like receptor family are key to innate immune recognition of microbial infection and may play important roles in the development of inflammatory diseases, including rheumatic diseases. In this study, we evaluated the role of NOD1 and NOD2 in the development of experimental arthritis. Ag-induced arthritis was generated in wild-type, NOD1(-/-), NOD2(-/-), or receptor-interacting serine-threonine kinase 2(-/-) (RIPK2(-/-)) immunized mice challenged intra-articularly with methylated BSA. Nociception was determined by the electronic Von Frey test. Neutrophil recruitment and histopathological analysis of proteoglycan loss were evaluated in inflamed joints. Joint levels of inflammatory cytokines/chemokines were measured by ELISA. Cytokine (IL-6 and IL-23) and NOD2 expression was determined in mouse synovial tissue by RT-PCR. The NOD2(-/-) and RIPK2(-/-), but not NOD1(-/-), mice were protected from Ag-induced arthritis, which was characterized by a reduction in neutrophil recruitment, nociception, and cartilage degradation. NOD2/RIPK2 signaling impairment was associated with a reduction in proinflammatory cytokines and chemokines (TNF, IL-1 beta, and CXCL1/KC). IL-17 and the IL-17-triggering cytokines IL-6 and IL-23 were also reduced in the joint, but there was no difference in the percentage of CD4(+) IL-17(+) cells in the lymph nodes between arthritic wild-type and NOD2(-/-) mice. Altogether, these findings point to a pivotal role of NOD2/RIPK2 signaling in the onset of experimental arthritis by triggering an IL-17-dependent joint immune response. We therefore propose that NOD2 signaling is a target for the development of new therapies for the control of rheumatoid arthritis. The Journal of Immunology, 2012, 188: 5116-5122.

Relevance: 100.00%

Abstract:

Background: Transcript enumeration methods such as SAGE, MPSS, and sequencing-by-synthesis EST "digital northern" are important high-throughput techniques for digital gene expression measurement. Like other counting or voting processes, these measurements constitute compositional data exhibiting properties particular to the simplex space, where the summation of the components is constrained. These properties are not present in regular Euclidean spaces, on which hybridization-based microarray data are often modeled. Therefore, pattern recognition methods commonly used for microarray data analysis may be non-informative for the data generated by transcript enumeration techniques, since they ignore certain fundamental properties of this space. Results: Here we present a software tool, Simcluster, designed to perform clustering analysis for data on the simplex space. We present Simcluster as a stand-alone command-line C package and as a user-friendly on-line tool. Both versions are available at: http://xerad.systemsbiology.net/simcluster. Conclusion: Simcluster is designed in accordance with a well-established mathematical framework for compositional data analysis, which provides principled procedures for dealing with the simplex space, and is thus applicable in a number of contexts, including enumeration-based gene expression data.
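The compositional-data framework mentioned here (Aitchison geometry) can be sketched briefly: counts are mapped out of the simplex with a centered log-ratio (clr) transform, after which ordinary Euclidean distances and clustering become principled. This NumPy sketch is illustrative and not Simcluster's implementation; the pseudocount rule in particular is an assumption:

```python
import numpy as np

def clr(counts, pseudo=0.5):
    """Centered log-ratio transform: maps compositions from the simplex
    to Euclidean space. A pseudocount handles zero counts (an assumption,
    not Simcluster's exact rule)."""
    x = np.asarray(counts, dtype=float) + pseudo
    x = x / x.sum(axis=1, keepdims=True)        # closure onto the simplex
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

def aitchison_distance(counts):
    """Pairwise inter-sample distances in Aitchison geometry; any standard
    clustering algorithm can then run on this distance matrix."""
    z = clr(counts)
    diff = z[:, None, :] - z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))
```

Note the scale invariance this buys: samples `[10, 10, 10]` and `[1, 1, 1]` have identical compositions, so their Aitchison distance is zero even though their raw counts differ tenfold, which is exactly the property Euclidean treatment of counts misses.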

Relevance: 100.00%

Abstract:

This work proposes a novel texture descriptor based on fractal theory. The method builds on the Bouligand-Minkowski descriptors. We decompose the original image recursively into four equal parts. In each recursion step, we estimate the average and the deviation of the Bouligand-Minkowski descriptors computed over each part. We then extract entropy features from both the average and the deviation. The proposed descriptor is obtained by concatenating these measures. The method is tested in a classification experiment on the well-known Brodatz and Vistex datasets. The results demonstrate that the novel technique achieves better results than classical and state-of-the-art texture descriptors, such as Local Binary Patterns, Gabor wavelets and the co-occurrence matrix.
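The recursive four-way decomposition can be sketched generically. In this hedged illustration, `block_feature` is a placeholder for the paper's Bouligand-Minkowski descriptors (here any per-block scalar works), and the exact way entropy is taken from the average/deviation profiles is an assumption:

```python
import numpy as np

def quadrants(img):
    """Split an image into its four equal parts (even dimensions assumed)."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]

def entropy(p):
    """Shannon entropy of a nonnegative profile, normalized to sum to 1."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def recursive_descriptor(img, block_feature, depth=3):
    """At each recursion level, collect the per-quadrant feature values and
    keep the entropy of the profile plus its deviation; concatenating the
    per-level values gives the final descriptor."""
    feats = []
    level = [np.asarray(img, dtype=float)]
    for _ in range(depth):
        level = [q for blk in level for q in quadrants(blk)]
        vals = np.array([block_feature(b) for b in level])
        feats.append(entropy(np.abs(vals) + 1e-12))  # complexity of the profile
        feats.append(float(vals.std()))              # spread across quadrants
    return np.array(feats)
```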

Relevance: 100.00%

Abstract:

In this paper, we present a novel texture analysis method based on deterministic partially self-avoiding walks and fractal dimension theory. After finding the attractors of the image (sets of pixels) using deterministic partially self-avoiding walks, they are dilated toward the whole image by adding pixels according to their relevance. The relevance of each pixel is calculated as the shortest path between the pixel and the pixels that belong to the attractors. The proposed texture analysis method is demonstrated to outperform popular and state-of-the-art methods (e.g. Fourier descriptors, co-occurrence matrix, Gabor filter and local binary patterns) as well as the deterministic tourist walk method and recent fractal methods on well-known texture image datasets.
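The walk underlying this method can be sketched in its simplest form: from each pixel, deterministically step to the most similar neighbor not visited in the last few steps (the "memory"), until the trajectory falls into a cycle, the attractor. This is a minimal illustration with a 4-neighborhood and a tiny memory, not the paper's full formulation:

```python
import numpy as np

def tourist_walk(img, start, memory=2, max_steps=100):
    """Deterministic partially self-avoiding walk: from each pixel, move to
    the 4-neighbor with the most similar intensity that is not among the
    last `memory` visited pixels."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    path = [start]
    for _ in range(max_steps):
        y, x = path[-1]
        recent = set(path[-memory:])
        best, best_d = None, None
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in recent:
                d = abs(img[ny, nx] - img[y, x])
                if best_d is None or d < best_d:
                    best, best_d = (ny, nx), d
        if best is None:   # all admissible moves exhausted
            break
        path.append(best)
    return path
```

Because the rule is deterministic, every starting pixel yields a reproducible trajectory; texture statistics are then derived from the distribution of trajectory and attractor sizes.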

Relevance: 100.00%

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the group of individuals. Even though today's methods for analyzing the data are well developed and close to reaching a standard organization (through the efforts of international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method exists. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1 starts from a necessary biological introduction, then reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality control to the preprocessing steps used in the data analysis in the rest of the dissertation.
Chapter 2 provides a critical review of standard analysis methods, stressing most of their problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000 based on its recurrence among the 1,000 lists of differentially expressed probes. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score > 300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4].
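The MultiSAM resampling scheme can be sketched as follows. This is an illustrative NumPy version in which a plain Welch t statistic with a fixed cutoff stands in for the SAM statistic, and the iteration count and cutoff are assumptions:

```python
import numpy as np

def multisam_scores(lpc, mpc, n_iter=1000, t_cut=2.0, seed=0):
    """Sketch of the MultiSAM idea: repeatedly subsample the more populated
    class (MPC) down to the size of the less populated class (LPC), run a
    per-probe two-sample test, and score each probe by how often it passes.
    Rows are samples, columns are probes."""
    rng = np.random.default_rng(seed)
    lpc, mpc = np.asarray(lpc, float), np.asarray(mpc, float)
    n = lpc.shape[0]
    scores = np.zeros(lpc.shape[1], dtype=int)
    for _ in range(n_iter):
        sub = mpc[rng.choice(mpc.shape[0], size=n, replace=False)]
        m1, m2 = lpc.mean(0), sub.mean(0)
        v1, v2 = lpc.var(0, ddof=1), sub.var(0, ddof=1)
        t = (m1 - m2) / np.sqrt(v1 / n + v2 / n + 1e-12)  # Welch statistic
        scores += (np.abs(t) > t_cut)                     # recurrence count
    return scores
```

The final score per probe plays the role of the 0-1,000 recurrence score in the text: probes that are consistently differential across subsamplings accumulate high counts, while probes that only appear under particular subsamplings do not.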
In fact, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only thing that can play a crucial role. In some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then to evaluate the third class G2 as a test set to obtain, for each G2 sample, the probability of belonging to class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to that of breast cancer samples of grade 1. In the literature this result had been conjectured, but no measure of significance had been given before.
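Since the similarity argument only requires a calibrated posterior P(class | x), a minimal sketch can use plain logistic regression as a stand-in for the RVM (which would additionally provide sparse Bayesian weights). All names and data below are illustrative:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Binary logistic model trained by gradient descent on G1 (y=0)
    vs G3 (y=1); a posterior-probability stand-in for the RVM."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)           # mean log-loss gradient
    return w

def posterior(X, w):
    """P(class G3 | x) for each row of X under the trained model."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return 1 / (1 + np.exp(-X @ w))
```

Usage mirrors the protocol in the text: train on G1 vs G3, then score the held-out G2 samples; a mean posterior below 0.5 reads as "G2 is more similar to G1", which is the qualitative result reported for the tumor-grade data.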

Relevance: 100.00%

Abstract:

Different studies suggest that inner facial features are not the only discriminative features for tasks such as person identification or gender classification. Indeed, they have shown an influence on these tasks of features which are part of the local face context, such as hair. However, object-centered approaches which ignore local context dominate the research in computational-vision-based facial analysis. In this paper, we performed an analysis to study which areas and which resolutions are diagnostic for the gender classification problem. We first demonstrate the importance of contextual features for gender classification by human observers, using a psychophysical "bubbles" technique.

Relevance: 100.00%

Abstract:

This PhD thesis discusses the rationale for the design and use of synthetic oligosaccharides in the development of glycoconjugate vaccines, and the role of physicochemical methods in the characterization of these vaccines. The study concerns two infectious diseases that represent a serious problem for national healthcare programs: human immunodeficiency virus (HIV) and Group A Streptococcus (GAS) infections. Both pathogens possess distinctive carbohydrate structures that have been described as suitable targets for vaccine design. The Group A Streptococcus cell membrane polysaccharide (GAS-PS) is an attractive vaccine antigen candidate based on its conserved, constant expression pattern and its ability to confer immunoprotection in a relevant mouse model. Analysis of the immunogenic response within at-risk populations suggests an inverse correlation between high anti-GAS-PS antibody titres and GAS infection cases. Recent studies show that a chemically synthesized core polysaccharide-based antigen may represent an antigenic structural determinant of the large polysaccharide. Based on GAS-PS structural analysis, the study evaluates the potential of a synthetic design approach to GAS vaccine development and compares the efficiency of synthetic antigens with that of the long isolated GAS polysaccharide. Synthetic GAS-PS structural analogues were specifically designed and generated to explore the impact of antigen length and terminal residue composition. For the HIV-1 glycoantigens, the dense glycan shield on the surface of the envelope protein gp120 was chosen as a target. This shield masks conserved protein epitopes and facilitates virus spread via binding to glycan receptors on susceptible host cells. The broadly neutralizing monoclonal antibody 2G12 binds a cluster of high-mannose oligosaccharides on the gp120 subunit of the HIV-1 Env protein. This oligomannose epitope has been the subject of synthetic vaccine development.
The clustered nature of the 2G12 epitope suggested that multivalent antigen presentation would be important for developing a carbohydrate-based vaccine candidate. I describe the development of neoglycoconjugates displaying clustered HIV-1-related oligomannose carbohydrates and their immunogenic properties.

Relevance: 100.00%

Abstract:

The aim of this work is to conduct a finite element model analysis of a small-size concrete beam and of a full-size concrete beam internally reinforced with BFRP and exposed to elevated temperatures. Experimental tests performed at Kingston University were used to compare against the results of the numerical analysis for the small-size concrete beam. Once the behavior of the small-size beam at room temperature has been investigated, the heating phase follows: reinforced beams are tested at 100 °C, 200 °C and 300 °C under load. The aim of the finite element analysis is to reproduce the three-point bending test carried out in the oven during the exposure of the beam at room temperature and at elevated temperatures. The performance and deformability of reinforced beams are directly correlated with the material properties, and a wide-ranging analysis of the elastic modulus and the coefficient of thermal expansion is given in this work. Developing a good correlation between the numerical model and the experimental test is the main objective of the analysis of the small-size concrete beam; for both models the aim is also to estimate the deterioration of the material properties due to the heating process and the influence of different parameters on the final result. The focus of the full-size modelling, which makes up the last part of this work, is to evaluate the effect of elevated temperatures, the material deterioration and the deflection trend in a reinforced beam of a different size. A comparison between the results of the different models has been developed.

Relevance: 100.00%

Abstract:

In this work, two different systems were investigated; what connects them is the set of spatially resolved, spectroscopic surface-analysis techniques employed, such as imaging XPS, X-ray near-edge photoemission electron microscopy (XANES-PEEM) and Auger electron spectroscopy (AES). In the first part of the work, diamond nucleation domains on Ir/SrTiO3 were investigated and compared with prevailing models from the literature. The nucleation domains, as they form in the microwave-induced CVD process using the BEN procedure (bias-enhanced nucleation), constitute the "starting layer" for heteroepitaxial growth of a highly oriented diamond film. However, they develop under conditions in which 3D diamond is eroded and etched away. XANES-PEEM measurements revealed, for the first time with spatial resolution, the local bonding environment of the carbon in the nucleation domains, and AES measurements allowed the thickness of the nucleation domains to be estimated. The nucleation domains turned out to be regions about 1 nm thick in which the transition from an sp2-coordinated amorphous carbon layer to a diamond layer with a high sp3 fraction takes place. To explain the nucleation process, the "cluster model" of Lifshitz et al. was adopted and extended by one essential aspect: the stability of the nucleation domains against the etching effect that the nucleation process exerts on bulk diamond is explained by a strong interaction between the diamond and the iridium substrate, with the thickness of about 1 nm taken as a measure of the extent of this interaction region. The second part of the work deals with the characterization of presolar SiC grains and the trace elements enclosed within them. Besides the main elements Si and C, spinels such as chromite (FeCr2O4), corundum (Al2O3) and various trace elements (e.g.
Al, Ba and Y) were also detected. Furthermore, XPS lines were detected at energies attributable to the rare-earth elements erbium, thulium and dysprosium. Because of deviations from the literature regarding the pronounced intensity of these XPS lines, the detection of strongly sulfur-containing grains (e.g. so-called "Fremdlinge") charged to several volts was discussed as an alternative explanation for several of the signals. It is shown that imaging XPS and XANES-PEEM are powerful methods for the chemical characterization of SiC grains and other solar and presolar matter down to sizes of 100-200 nm in diameter (e.g. as groundwork for a later mass-spectrometric isotope analysis).

Relevance: 100.00%

Abstract:

In the oil and gas industry, pore-scale imaging and simulation are on their way to becoming routine applications. Their further potential can be applied in the environmental field, e.g. to the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) provides a non-destructive 3D imaging technique that is also frequently used to investigate the internal structure of geological samples. The first goal of this dissertation was the implementation of an image processing technique that removes the beam hardening of X-ray computed tomography and simplifies the segmentation of its data. The second goal of this work was to examine the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modelling in pore spaces using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the center, and the different phases could then be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least-squares support vector machine (LSSVM) method is characterized by a modular design and is very well suited to pattern recognition and classification. For this reason, the LSSVM method was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but it then requires longer computation times, so that multidimensional training data sets have to be used.
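The radial character of beam hardening in a cylindrical sample suggests a simple correction: fit a low-order trend in radius and subtract it. The NumPy sketch below is a hedged illustration of that radial-correction idea only; the thesis' actual method is a surface-fitting algorithm for arbitrarily shaped objects:

```python
import numpy as np

def correct_radial_beam_hardening(slice2d, deg=2):
    """Remove the cupping artifact in a reconstructed CT slice by fitting a
    polynomial of degree `deg` to intensity as a function of radius from the
    slice center, then flattening that trend (mean level is preserved)."""
    h, w = slice2d.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - (h - 1) / 2) ** 2 + (xx - (w - 1) / 2) ** 2).ravel()
    v = slice2d.astype(float).ravel()
    coeffs = np.polyfit(r, v, deg)               # least-squares radial trend
    trend = np.polyval(coeffs, r).reshape(h, w)
    return slice2d - trend + trend.mean()
```

Flattening the radial trend equalizes gray values of the same material at different radii, which is what makes a subsequent global threshold (or pixel classifier) usable for phase segmentation.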
The dynamics of the immiscible phases air and water are investigated by a combination of pore morphology and the lattice Boltzmann method for drainage and imbibition processes in 3D data sets of soils obtained by synchrotron-based XCT. Although pore morphology is a simple method of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused by the predominantly connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water saturation level and is compared with a macroscopic calculation from empirical models. The data agree well, especially at high water saturations. To predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry and fluid flow velocity was studied in a soil aggregate, e.g. with the microorganism Escherichia coli. The asymmetric, long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and the heterogeneity of the flow field. The biocolloid residence time was observed to be a function of the pressure gradient as well as of the colloid size. Our modelling results agree very well with previously published data.