Abstract:
Introduction and aims of the research

Nitric oxide (NO) and endocannabinoids (eCBs) are major retrograde messengers involved in synaptic plasticity (long-term potentiation, LTP, and long-term depression, LTD) in many brain areas (including the hippocampus and neocortex), as well as in learning and memory processes. NO is synthesized by NO synthase (NOS) in response to increased cytosolic Ca2+ and exerts its functions mainly through soluble guanylate cyclase (sGC) and cGMP production. The main target of cGMP is the cGMP-dependent protein kinase (PKG). Activity-dependent release of eCBs in the CNS leads to the activation of the Gαi/o-coupled cannabinoid receptor 1 (CB1) at both glutamatergic and inhibitory synapses. The perirhinal cortex (Prh) is a multimodal associative cortex of the temporal lobe, critically involved in visual recognition memory, and LTD is proposed to be the cellular correlate underlying this form of memory. Cholinergic neurotransmission has been shown to play a critical role in both visual recognition memory and LTD in Prh. Moreover, visual recognition memory is one of the main cognitive functions impaired in the early stages of Alzheimer's disease. The main aim of my research was to investigate the role of NO and eCBs in synaptic plasticity in rat Prh and in visual recognition memory. Part of this research was dedicated to the study of synaptic transmission and plasticity in a murine model (Tg2576) of Alzheimer's disease.

Methods

Field potential recordings. Extracellular field potential recordings were carried out in horizontal Prh slices from Sprague-Dawley or Dark Agouti juvenile (p21-35) rats. LTD was induced with a single train of 3000 pulses delivered at 5 Hz (10 min), or via bath application of carbachol (Cch; 50 μM) for 10 min. LTP was induced by theta-burst stimulation (TBS). In addition, input/output curves and 5Hz-LTD experiments were carried out in Prh slices from 3-month-old Tg2576 mice and littermate controls.

Behavioural experiments. The spontaneous novel object exploration task was performed in adult Dark Agouti rats bilaterally cannulated in Prh. Drugs or vehicle (saline) were infused directly into the Prh 15 min before training to verify the role of nNOS and CB1 in visual recognition memory acquisition. Object recognition memory was tested 20 min and 24 h after the end of the training phase.

Results

Electrophysiological experiments in Prh slices from juvenile rats showed that 5Hz-LTD is due to the activation of the NOS/sGC/PKG pathway, whereas Cch-LTD relies on NOS/sGC but not PKG activation. By contrast, NO does not appear to be involved in LTP in this preparation. Furthermore, I found that eCBs are involved in LTP induction, but not in basal synaptic transmission, 5Hz-LTD or Cch-LTD. Behavioural experiments demonstrated that blockade of nNOS impairs rat visual recognition memory tested at 24 hours, but not at 20 min; blockade of CB1, however, did not affect visual recognition memory acquisition at either time point. In 3-month-old Tg2576 mice, deficits in basal synaptic transmission and 5Hz-LTD were observed compared to littermate controls.

Conclusions

The results obtained in Prh slices from juvenile rats indicate that NO and CB1 play a role in the induction of LTD and LTP, respectively. These results are confirmed by the observation that nNOS, but not CB1, is involved in visual recognition memory acquisition. The preliminary results obtained in the murine model of Alzheimer's disease indicate that deficits in synaptic transmission and plasticity occur very early in Prh; further investigations are required to characterize the molecular mechanisms underlying these deficits.
Abstract:
The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems by addressing three important problems: fingerprint enhancement, fingerprint orientation extraction, and the automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, a reliable set of features cannot be detected. A new fingerprint enhancement method, both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering, and iteratively expands from those regions toward low-quality ones. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, by highlighting the role of pre- and post-processing, we show how the extraction can be improved. Second, a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, so an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is shown by relevant statistics: more than 1450 algorithms submitted and two international competitions.
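As background for the orientation extraction problem, the following is a minimal sketch of the classic gradient-based (doubled-angle) orientation field estimator on which many baseline methods build; it is not the thesis's hybrid method, and the Sobel filters, uniform smoothing and block size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def orientation_field(img, block=16):
    """Block-wise ridge orientation from a grayscale fingerprint image
    using the standard gradient (doubled-angle) method."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient
    # Averaging in the doubled-angle domain avoids the 0/180 degree ambiguity.
    gxx = ndimage.uniform_filter(gx * gx, block)
    gyy = ndimage.uniform_filter(gy * gy, block)
    gxy = ndimage.uniform_filter(gx * gy, block)
    # Ridge orientation is perpendicular to the dominant gradient direction.
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
    # Coherence in [0, 1]: low values flag noisy regions where the estimate
    # is unreliable, the regime an adaptive hybrid method would target.
    coherence = np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2) / (gxx + gyy + 1e-9)
    return theta, coherence
```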
Abstract:
The study of artificial intelligence aims at solving a class of problems that require cognitive processes which are difficult to encode in an algorithm. Visual recognition of shapes and figures, the interpretation of sounds, and games of incomplete knowledge all rely on the human ability to interpret partial inputs as if they were complete, and to act accordingly. In the first chapter of this thesis, a simple mathematical formalism is constructed to describe the act of making choices. The "learning" process is described in terms of the maximization of a performance function over a parameter space for an ansatz of a function from a vector space to a finite, discrete set of choices, via a training set that provides examples of correct choices to be reproduced. In light of this formalism, some of the most widespread artificial intelligence techniques are analysed, and some problems arising from their use are highlighted. In the second chapter, the same formalism is applied to a less intuitive but more functional redefinition of the performance function which allows, for a linear ansatz, the explicit formulation of a set of equations in the components of the parameter-space vector that identifies the absolute maximum of the performance function. The solution of this set of equations is treated by means of the contraction mapping theorem. A natural polynomial generalization is also presented. In the third chapter, some examples to which the results of the second chapter can be applied are studied in more detail. The concept of the intrinsic degree of a problem is introduced. Several performance optimizations are also discussed, such as the elimination of zeros, analytic precomputation, fingerprinting, and the reordering of components for the partial evaluation of high-dimensional scalar products. Finally, single-choice problems are introduced, i.e. the class of problems for which a training set is available for only one choice. In the fourth chapter, an application in the field of medical imaging diagnostics is discussed in more detail; in particular, the problem of computer-aided detection of microcalcifications in mammograms is treated.
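To make the formalism concrete, here is a minimal formalization of the setup sketched above; the notation (V, C, Θ, P) is chosen here for illustration and need not match the thesis's.

```latex
% Decision ansatz: a parametrized map from inputs to a finite choice set.
\[
  f_{\theta} : V \to C, \qquad
  C = \{c_1, \dots, c_k\}, \qquad \theta \in \Theta .
\]
% Learning as maximization of a performance function over parameter
% space, given a training set of correct choices; the simplest choice
% of P is the fraction of correctly reproduced examples:
\[
  \theta^{\ast} = \arg\max_{\theta \in \Theta}
  P\!\left(f_{\theta};\, \{(v_i, c_i)\}_{i=1}^{N}\right),
  \qquad
  P = \frac{1}{N} \sum_{i=1}^{N}
  \mathbf{1}\!\left[f_{\theta}(v_i) = c_i\right].
\]
```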
Abstract:
Antibody microarrays are of great research interest because of their potential application as biosensors for high-throughput protein and pathogen screening technologies. In this active area, there is still a need for novel structures and assemblies providing insight into binding interactions, such as spherical and annulus-shaped protein structures, e.g. the utilization of curved surfaces for enhanced protein-protein interactions and detection of antigens. The goal of the presented work was therefore to establish a new technique for the label-free detection of biomolecules and bacteria on topographically structured surfaces suitable for antibody binding.

In the first part of the presented thesis, the fabrication of monolayers of inverse opals with 10 μm diameter and the immobilization of antibodies on their interior surface are described. For this purpose, several established methods for linking antibodies to glass, including Schiff bases, EDC/S-NHS chemistry and the biotin-streptavidin affinity system, were tested. The employed methods included immunofluorescence and image analysis by phase contrast microscopy. It could be shown that these methods were not successful in terms of antibody immobilization and subsequent bacteria binding. Hence, a method based on the application of an active-ester-silane was introduced. It showed promising results but also the need for further analysis; in particular, alternative antibodies addressing other antigens on the exterior of bacteria will be sought in the future.

Building on the ability to control antibody-functionalized surfaces, a new technique is presented that employs colloidal templating to yield large-scale (~cm2) 2D arrays of antibodies against E. coli K12, eGFP and human integrin αvβ3 on a versatile glass surface. The antibodies were swept to reside around the templating microspheres during solution drying and physisorbed on the glass. After removing the microspheres, the formation of annulus-shaped antibody structures was observed. The preserved antibody structure and functionality are shown by binding of the specific antigens and secondary antibodies. The improved detection of specific bacteria from a crude solution compared to conventional "flat" antibody surfaces is demonstrated, as is the setting up of an integrin-binding platform for targeted recognition and surface interactions of eukaryotic cells. The structures were investigated by atomic force, confocal and fluorescence microscopy. Operational parameters such as drying time, temperature, humidity and surfactants were optimized to obtain a stable antibody structure.
Abstract:
Automatically recognizing faces captured under uncontrolled environments has been a challenging topic for decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means of improving the robustness of face recognition under challenging environments. In particular, we introduce cohort score normalization into the undersampled face recognition problem. Further, we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on normalization performance. The experimental results show that a larger cohort set gives more stable and often better results up to a point, after which performance saturates, and that cohort samples of different quality indeed produce different cohort normalization performance. Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional (e.g., geometric transformations introduced by the acquisition device) and intentional alterations (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms are able to overcome limited digital alterations but are sensitive to more substantial modifications. Further, we develop two useful descriptors for detecting those alterations which can significantly affect recognition performance. Finally, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery. Extensive experiments conducted on a plastic surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
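As context for cohort score normalization, the following is a minimal sketch of one widely used baseline scheme (T-norm), in which scores against a cohort set standardize the raw match score; it is not the specific method developed in this work, and the matcher interface is a hypothetical placeholder.

```python
import numpy as np

def tnorm(raw_score, probe, cohort_templates, matcher):
    """T-norm cohort score normalization (a common baseline scheme).

    The probe is matched against a set of cohort templates; the raw
    probe-vs-claimed-identity score is then standardized by the mean
    and standard deviation of those cohort scores. `matcher` is a
    placeholder for any function returning a similarity score.
    """
    cohort_scores = np.array([matcher(probe, t) for t in cohort_templates])
    mu, sigma = cohort_scores.mean(), cohort_scores.std()
    # Epsilon guards against a degenerate (constant-score) cohort.
    return (raw_score - mu) / (sigma + 1e-9)
```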
Abstract:
The study of the bio-recognition phenomena behind a biological process is nowadays considered a useful tool for deeply understanding physiological mechanisms, allowing the discovery of novel biological targets and the development of new lead candidates. Moreover, understanding this kind of phenomena can help characterize the absorption, distribution, metabolism, elimination and toxicity properties of a new drug (ADMET parameters). Recent estimates show that about half of all drugs in development fail to make it to market because of ADMET deficiencies; a rapid determination of ADMET parameters in the early stages of drug discovery would therefore save money and time, allowing the better compounds to be chosen and the failures to be eliminated early. The monitoring of drug binding to plasma proteins is becoming essential in the field of drug discovery to characterize drug distribution in the human body. Human serum albumin (HSA) is the most abundant protein in plasma, playing a fundamental role in the transport of drugs, metabolites and endogenous factors; the study of the binding mechanism to HSA has therefore become crucial to the early characterization of the pharmacokinetic profile of new potential leads. Furthermore, most distribution experiments carried out in vivo are performed on animals. Hence, it is interesting to determine the binding of new compounds to albumins from different species in order to evaluate the reliability of extrapolating distribution data obtained in animals to humans. It is clear that the characterization of interactions between proteins and drugs creates a growing need for methodologies to study specific molecular events. A wide variety of biochemical techniques have been applied to this purpose. High-performance liquid affinity chromatography, circular dichroism and optical biosensors represent three techniques able to elucidate the interaction of a new drug with its target and with other proteins that could interfere with ADMET parameters.
Abstract:
The Eye-Trauma project is part of the development of a surgical simulator for traumas of the ocular region, developed in collaboration with the Simulation Group in Boston, Harvard Medical School and Massachusetts General Hospital. The simulator features a silicone torso equipped with interchangeable modules of the ocular region to simulate different types of trauma. The user is asked to perform the medical suturing procedure using surgical instruments fitted with force and opening sensors. The collected data are used within the software for gesture recognition and real-time performance monitoring. The gesture recognition algorithm, which I developed, is based on the concept of state machines; transitions between states are triggered by the events detected by the simulator.
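The recognition scheme described above is a state machine driven by simulator events. Below is a minimal sketch of that idea; the state and event names are hypothetical placeholders, since the actual gestures and sensor events are not specified in the abstract.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    NEEDLE_INSERTED = auto()
    THREAD_PULLED = auto()
    KNOT_TIED = auto()

# Hypothetical transition table: (current state, simulator event) -> next state.
TRANSITIONS = {
    (State.IDLE, "needle_contact"): State.NEEDLE_INSERTED,
    (State.NEEDLE_INSERTED, "force_release"): State.THREAD_PULLED,
    (State.THREAD_PULLED, "tool_closed"): State.KNOT_TIED,
    (State.KNOT_TIED, "tool_opened"): State.IDLE,  # ready for the next stitch
}

class GestureRecognizer:
    def __init__(self):
        self.state = State.IDLE

    def on_event(self, event: str) -> State:
        """Advance the machine on a simulator event; unknown events
        leave the current state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```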
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: Convolutional Neural Networks (CNNs) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad family of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is known as a new, emerging paradigm: a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
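For readers unfamiliar with the supervised side of the comparison, here is a minimal CNN for object recognition written in PyTorch; the input size and class count are illustrative assumptions, and this is not the dissertation's experimental network.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A minimal CNN for object recognition, assuming 32x32 RGB inputs
    and 10 classes (both hypothetical choices)."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Learned convolutional features feed a linear classifier:
        # the automatic feature extraction the text refers to.
        return self.classifier(self.features(x).flatten(1))
```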
Abstract:
One to three percent of patients exposed to intravenously injected iodinated contrast media (CM) develop delayed hypersensitivity reactions. Positive patch test reactions, immunohistological findings, and CM-specific proliferation of T cells in vitro suggest a pathogenetic role for T cells. We have previously demonstrated that CM-specific T cell clones (TCCs) show a broad range of cross-reactivity to different CM. However, the mechanism of specific CM recognition by T cell receptors (TCRs) has not been analysed so far.
Abstract:
There is conflicting evidence as to whether Parkinson's disease (PD) is associated with impaired recognition memory and which of its underlying processes, namely recollection and familiarity, is more affected by the disease. The present study explored the contribution of recollection and familiarity to verbal recognition memory performance in 14 nondemented PD patients and a healthy control group with two different methods: (i) the word-frequency mirror effect, and (ii) Remember/Know judgments. Overall, recognition memory of the patients was intact. The word-frequency mirror effect was observed in both patients and controls: hit rates were higher and false alarm rates were lower for low-frequency compared to high-frequency words. However, Remember/Know judgments indicated normal recollection but impaired familiarity. Our findings suggest that mild-to-moderate PD patients show a selective impairment of familiarity, whereas recollection and overall recognition memory are intact.
Abstract:
During the last 10 years, several molecular markers have been established as useful tools in the armamentarium of the hematologist. As a consequence, the number of hematologic molecular analyses performed has increased immensely. Often, such tests replace or complement other laboratory methods. Molecular markers can be useful in many ways: they can serve for diagnostics, describe the prognostic profile, predict which types of drugs are indicated, and be used for the therapeutic monitoring of the patient to indicate an adequate response or predict resistance or relapse of the disease. Many markers fulfill more than one of these roles. Most important, however, is the right choice of analyses at the right time points!
Abstract:
Supramolecular two-dimensional engineering epitomizes the design of complex molecular architectures through recognition events in multicomponent self-assembly. Despite being the subject of in-depth experimental studies, such intricate phenomena have not yet been elucidated in time and space with atomic precision. Here we use atomistic molecular dynamics to simulate the recognition of complementary hydrogen-bonding modules forming 2D porous networks on graphite. We describe the transition path from the melt to the crystalline hexagonal phase and show that self-assembly proceeds through a series of intermediate states featuring a plethora of polygonal types. Finally, we design a novel bicomponent system possessing kinetically improved self-healing ability in silico, thus demonstrating that a priori engineering of 2D self-assembly is possible.
Abstract:
Mapping and ablation of atrial tachycardias (ATs) secondary to catheter ablation of atrial fibrillation (AF) is often challenging due to the complex atrial substrate, different AT mechanisms, and potential origin not only in the left atrium (LA) but also from the right atrium (RA) and the adjacent thoracic veins.
Abstract:
Chemicals can elicit T-cell-mediated diseases such as allergic contact dermatitis and adverse drug reactions. Therefore, testing of chemicals, drugs and protein allergens for hazard identification and risk assessment is essential in regulatory toxicology. The seventh amendment of the EU Cosmetics Directive now prohibits the testing of cosmetic ingredients in mice, guinea pigs and other animal species to assess their sensitizing potential. In addition, the EU Chemicals Directive REACh requires the retesting of more than 30,000 chemicals for different toxicological endpoints, including sensitization, which would require vast numbers of animals. Therefore, alternative methods are urgently needed to eventually replace animal testing. Here, we summarize the outcome of an expert meeting held in Rome on 7 November 2009 on the development of T-cell-based in vitro assays as tools in immunotoxicology to identify hazardous chemicals and drugs. In addition, we provide an overview of the development of the field over the last two decades.
Abstract:
Background

Many medical exams use 5 options for multiple choice questions (MCQs), although the literature suggests that 3 options are optimal. Previous studies on this topic have often been based on non-medical examinations, so we sought to analyse rarely selected, 'non-functional' distractors (NF-Ds) in high-stakes medical examinations, their detection by item authors, and the psychometric changes resulting from a reduction in the number of options.

Methods

Based on the Swiss Federal MCQ examinations from 2005-2007, the frequency of NF-Ds (selected by <1% or <5% of the candidates) was calculated. Distractors that were chosen the least or second least were identified, and the candidates who chose them were allocated to the remaining options using two extreme assumptions about their hypothetical behaviour: if rarely selected distractors were eliminated, candidates might either choose another option at random, or purposively choose the correct answer from which they had originally been distracted. In a second step, 37 experts were asked to mark the least plausible options. The consequences of a reduction from 4 to 3 or 2 distractors - based on item statistics or on the experts' ratings - with respect to difficulty, discrimination and reliability were modelled.

Results

About 70% of the 5-option items had at least 1 NF-D selected by <1% of the candidates (97% for NF-Ds selected by <5%). Only a reduction to 2 distractors, combined with the assumption that candidates would switch to the correct answer in the absence of a 'non-functional' distractor, led to relevant differences in reliability and difficulty (and, to a lesser degree, discrimination). The experts' ratings resulted in slightly greater changes than the statistical approach.

Conclusions

Based on item statistics and/or an expert panel's recommendation, a varying number of 3-4 (or in part 2) plausible distractors could be chosen without marked deterioration in psychometric characteristics.
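The reallocation step in the Methods can be made concrete with a short sketch. The function below models the two extreme assumptions for a single item; the interface and names are illustrative, not the study's actual code.

```python
import numpy as np

def reallocate(option_counts, correct_idx, drop_idx, assume="random"):
    """Model elimination of a rarely selected distractor under the two
    extreme assumptions described above.

    option_counts: candidates choosing each option of the item
    correct_idx:   index of the correct answer
    drop_idx:      index of the distractor being removed
    assume:        "random" (guess among remaining options) or
                   "correct" (all switch to the correct answer)
    """
    counts = np.array(option_counts, dtype=float)
    displaced = counts[drop_idx]
    counts[drop_idx] = 0.0
    remaining = [i for i in range(len(counts)) if i != drop_idx]
    if assume == "random":
        # Displaced candidates guess uniformly among the remaining options.
        counts[remaining] += displaced / len(remaining)
    else:
        # Displaced candidates all find the correct answer.
        counts[correct_idx] += displaced
    # Item difficulty = proportion of candidates answering correctly.
    difficulty = counts[correct_idx] / counts.sum()
    return counts, difficulty
```

Running this per item under both assumptions brackets the true effect of distractor removal on difficulty; discrimination and reliability can then be recomputed from the adjusted response patterns.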