782 results for Olfactory Recognition


Relevance: 20.00%

Abstract:

The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems with respect to three important problems: fingerprint enhancement, fingerprint orientation extraction and the automatic evaluation of fingerprint algorithms. Effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, a reliable set of features cannot be detected. A new fingerprint enhancement method, both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands, like wildfire, toward low-quality ones. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after introducing a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, by pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, a new hybrid orientation extraction method, following an adaptive scheme, significantly improves orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules; an automatic evaluation of fingerprint algorithms is therefore needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework for comparing fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is documented with relevant statistics: more than 1450 submitted algorithms and two international competitions.
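As background for the baseline methods discussed above, the classic gradient-based orientation-field estimator can be sketched in a few lines (a generic textbook approach, not the thesis's exact implementation; the block size and the doubled-angle averaging are standard assumptions):

```python
import numpy as np

def orientation_field(img, block=16):
    """Estimate a block-wise ridge orientation field (radians in [0, pi)).

    Within each block, the dominant ridge orientation is perpendicular
    to the average gradient direction; averaging is done on the doubled
    angle to avoid the 0/pi wrap-around.
    """
    gy, gx = np.gradient(img.astype(float))  # gradients along rows, cols
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            gxx = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            gxy = np.sum(2.0 * gx[sl] * gy[sl])
            # gradient direction -> ridge direction is rotated by pi/2
            theta[i, j] = 0.5 * np.arctan2(gxy, gxx) + np.pi / 2
    return np.mod(theta, np.pi)
```

On a synthetic image of vertical ridges this returns orientations of π/2 everywhere; on real fingerprints the doubled-angle field would additionally be smoothed to suppress noise, which is exactly where the pre- and post-processing discussed above comes in.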

Relevance: 20.00%

Abstract:

The study of artificial intelligence aims to solve a class of problems that, to be solved, require cognitive processes that are difficult to encode in an algorithm. Visual recognition of shapes and figures, the interpretation of sounds, and games of incomplete knowledge all rely on the human ability to interpret partial inputs as if they were complete, and to act accordingly. In the first chapter of this thesis, a simple mathematical formalism is constructed to describe the act of making choices. The "learning" process is described in terms of maximizing a performance function over a parameter space, for an ansatz of a function from a vector space to a finite, discrete set of choices, using a training set that provides examples of correct choices to reproduce. In light of this formalism, some of the most widespread artificial intelligence techniques are analyzed, and some problems arising from their use are highlighted. In the second chapter, the same formalism is applied to a less intuitive but more practical redefinition of the performance function which, for a linear ansatz, allows the explicit formulation of a set of equations in the components of the parameter-space vector that identifies the absolute maximum of the performance function. The solution of this set of equations is treated by means of the contraction-mapping theorem. A natural polynomial generalization is also shown. In the third chapter, some examples to which the results of the second chapter can be applied are studied in more detail, and the concept of the intrinsic degree of a problem is introduced. Some performance optimizations are also discussed, such as the elimination of zeros, analytic precomputation, fingerprinting, and the reordering of components for the partial expansion of high-dimensional scalar products. Finally, single-choice problems are introduced, i.e., the class of problems for which a training set is available for only one choice. The fourth chapter discusses in more detail an application in medical imaging diagnostics, namely the problem of computer-aided detection of microcalcifications in mammograms.
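The contraction-mapping solution of a set of equations, as invoked in the second chapter, can be illustrated with a minimal sketch (the linear map and data below are illustrative, not the thesis's actual equations): by the Banach fixed-point theorem, iterating x ↦ Ax + b converges to the unique fixed point whenever the map is a contraction.

```python
import numpy as np

def fixed_point(A, b, tol=1e-12, max_iter=10000):
    """Solve x = A @ x + b by Banach fixed-point iteration.

    Convergence is guaranteed when the map is a contraction,
    i.e. some operator norm of A is below 1 (checked here via
    the spectral norm).
    """
    assert np.linalg.norm(A, 2) < 1, "map must be a contraction"
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_next = A @ x + b
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

The result agrees with the direct linear solve of (I − A)x = b; the iterative form matters when the equations are only available as an update map, as in the formalism above.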

Relevance: 20.00%

Abstract:

Antibody microarrays are of great research interest because of their potential application as biosensors for high-throughput protein and pathogen screening technologies. In this active area, there is still a need for novel structures and assemblies that provide insight into binding interactions, such as spherical and annulus-shaped protein structures, e.g. the use of curved surfaces for enhanced protein-protein interactions and antigen detection. The goal of the presented work was therefore to establish a new technique for the label-free detection of biomolecules and bacteria on topographically structured surfaces suitable for antibody binding.

In the first part of the thesis, the fabrication of monolayers of inverse opals with 10 μm diameter and the immobilization of antibodies on their interior surface is described. For this purpose, several established methods for linking antibodies to glass, including Schiff bases, EDC/S-NHS chemistry and the biotin-streptavidin affinity system, were tested. The employed methods included immunofluorescence and image analysis by phase-contrast microscopy. It could be shown that these methods were not successful in terms of antibody immobilization and subsequent bacteria binding. Hence, a method based on the application of an active-ester silane was introduced. It showed promising results but also the need for further analysis; in particular, the search for alternative antibodies addressing other antigens on the exterior of bacteria will be pursued in the future.

Building on this ability to control antibody-functionalized surfaces, a new technique is presented that employs colloidal templating to yield large-scale (~cm²) 2D arrays of antibodies against E. coli K12, eGFP and human integrin αvβ3 on a versatile glass surface. The antibodies were swept to reside around the templating microspheres during solution drying and physisorbed on the glass. After removing the microspheres, the formation of annulus-shaped antibody structures was observed. The preserved antibody structure and functionality is shown by binding the specific antigens and secondary antibodies. The improved detection of specific bacteria from a crude solution, compared to conventional "flat" antibody surfaces, and the establishment of an integrin-binding platform for targeted recognition and surface interactions of eukaryotic cells are demonstrated. The structures were investigated by atomic force, confocal and fluorescence microscopy. Operational parameters such as drying time, temperature, humidity and surfactants were optimized to obtain a stable antibody structure.

Relevance: 20.00%

Abstract:

Automatically recognizing faces captured under uncontrolled environments has been a challenging topic over the past decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means to improve the robustness of face recognition under challenging conditions. In particular, we introduce cohort score normalization into the undersampled face recognition problem. Further, we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on normalization performance. The experimental results show that a larger cohort set gives more stable and often better results up to a point, after which performance saturates, and that cohort samples of different quality indeed produce different normalization performance. Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional (e.g., geometric transformations introduced by the acquisition device) and intentional (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms can overcome limited digital alterations but are sensitive to more substantial modifications. Further, we develop two useful descriptors for detecting those alterations that significantly affect recognition performance. Finally, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery. Extensive experiments conducted on a plastic-surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
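As a minimal illustration of cohort score normalization (a generic T-norm-style scheme, not necessarily the exact method developed in the thesis), a raw match score can be standardised against the probe's scores on a cohort of non-claimed models:

```python
import numpy as np

def cohort_normalize(raw_score, cohort_scores):
    """T-norm-style cohort normalization of a raw match score.

    The probe-vs-claimed-identity score is standardised by the mean
    and standard deviation of the probe's scores against a cohort of
    other models, making scores comparable across probes of varying
    quality and enabling a single global decision threshold.
    """
    cohort_scores = np.asarray(cohort_scores, dtype=float)
    mu = cohort_scores.mean()
    sigma = cohort_scores.std()
    return (raw_score - mu) / max(sigma, 1e-12)
```

A genuine score that stands far above the cohort distribution yields a large normalized value, while a score indistinguishable from the cohort is pushed toward zero; the discussion above of cohort size and quality translates directly into how reliable mu and sigma are.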

Relevance: 20.00%

Abstract:

The study of the bio-recognition phenomena behind a biological process is nowadays considered a useful tool for deeply understanding physiological mechanisms, allowing the discovery of novel biological targets and the development of new lead candidates. Moreover, understanding this kind of phenomenon can help characterize the absorption, distribution, metabolism, elimination and toxicity properties of a new drug (ADMET parameters). Recent estimates show that about half of all drugs in development fail to reach the market because of ADMET deficiencies; a rapid determination of ADMET parameters in the early stages of drug discovery would therefore save money and time, making it possible to select the most promising compounds and to discard weak candidates early. Monitoring drug binding to plasma proteins is becoming essential in drug discovery to characterize drug distribution in the human body. Human serum albumin (HSA), the most abundant protein in plasma, plays a fundamental role in the transport of drugs, metabolites and endogenous factors; the study of binding mechanisms to HSA has thus become crucial for the early characterization of the pharmacokinetic profile of new potential leads. Furthermore, most distribution experiments carried out in vivo are performed on animals; it is therefore interesting to determine the binding of new compounds to albumins from different species in order to evaluate the reliability of extrapolating distribution data obtained in animals to humans. It is clear that characterizing interactions between proteins and drugs creates a growing need for methodologies to study each specific molecular event. A wide variety of biochemical techniques have been applied to this purpose. High-performance liquid affinity chromatography, circular dichroism and optical biosensors are three techniques that can elucidate the interaction of a new drug with its target and with other proteins that could affect its ADMET parameters.
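For a standard 1:1 binding model (a textbook relationship, not a method from the thesis; the numbers below are illustrative), the bound fraction of a drug present in trace amounts follows directly from the association constant Ka and the protein concentration:

```python
def fraction_bound(ka, protein_conc):
    """Fraction of drug bound to a protein under a 1:1 binding model.

    Assumes the drug is present in trace amounts relative to the
    protein, so free protein ~ total protein:
        f_bound = Ka * [P] / (1 + Ka * [P])
    ka in M^-1, protein_conc in M.
    """
    x = ka * protein_conc
    return x / (1.0 + x)
```

With plasma HSA at roughly 6 × 10⁻⁴ M, a drug with Ka = 10⁴ M⁻¹ comes out about 86% bound, which is the kind of number the affinity-chromatography and biosensor measurements discussed above are meant to deliver.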

Relevance: 20.00%

Abstract:

The Eye-Trauma project is part of the development of a surgical simulator for traumas of the ocular region, built in collaboration with the Simulation Group in Boston, Harvard Medical School and Massachusetts General Hospital. The simulator features a silicone torso equipped with interchangeable modules of the ocular region to simulate different types of trauma. The user is asked to perform a suturing procedure using surgical instruments fitted with force and opening sensors. The collected data are used within the software for gesture recognition and real-time performance monitoring. The gesture-recognition algorithm, which I developed, is based on the concept of state machines; transitions between states are triggered by the events detected by the simulator.
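The event-driven state-machine idea can be sketched as follows (the state and event names are hypothetical placeholders, not the simulator's actual ones):

```python
class GestureFSM:
    """Minimal event-driven state machine for gesture recognition."""

    def __init__(self):
        # transitions[state][event] -> next state
        self.transitions = {
            "idle":     {"tool_grasped":   "approach"},
            "approach": {"needle_contact": "suturing"},
            "suturing": {"thread_pulled":  "knot"},
            "knot":     {"tool_released":  "done"},
        }
        self.state = "idle"

    def on_event(self, event):
        """Advance to the next state if the event is valid here;
        otherwise stay in the current state."""
        self.state = self.transitions.get(self.state, {}).get(event, self.state)
        return self.state
```

Each sensor reading from the instruments would be mapped to an event and fed to `on_event`; reaching the final state signals that the gesture was performed in the correct order, which is the basis for the real-time performance scoring described above.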

Relevance: 20.00%

Abstract:

Flowers attract honeybees using colour and scent signals. Bimodality (having both scent and colour) in flowers leads to increased visitation rates, but how the signals influence each other in a foraging situation is still quite controversial. We studied four basic questions: When faced with conflicting scent and colour information, will bees choose by scent and ignore the “wrong” colour, or vice versa? To get to the bottom of this question, we trained bees on scent-colour combination AX (rewarded) versus BY (unrewarded) and tested them on AY (previously rewarded colour and unrewarded scent) versus BX (previously rewarded scent and unrewarded colour). It turned out that the result depends on stimulus quality: if the colours are very similar (unsaturated blue and blue-green), bees choose by scent. If they are very different (saturated blue and yellow), bees choose by colour. We used the same scents, lavender and rosemary, in both cases. Our second question was: Are individual bees hardwired to use colour and ignore scent (or vice versa), or can this behaviour be modified, depending on which cue is more readily available in the current foraging context? To study this question, we picked colour-preferring bees and gave them extra training on scent-only stimuli. Afterwards, we tested if their preference had changed, and if they still remembered the scent stimulus they had originally used as their main cue. We came to the conclusion that a colour preference can be reversed through scent-only training. We also gave scent-preferring bees extra training on colour-only stimuli, and tested for a change in their preference. The number of animals tested was too small for statistical tests (n = 4), but a common tendency suggested that colour-only training leads to a preference for colour. A preference to forage by a certain sensory modality therefore appears to be not fixed but flexible, and adapted to the bee’s surroundings. 
Our third question was: Do bees learn bimodal stimuli as the sum of their parts (elemental learning), or as a new stimulus that differs from the sum of its components (configural learning)? We trained bees on bimodal stimuli, then tested them on the colour components only, and on the scent components only. We performed this experiment with a similar colour set (unsaturated blue and blue-green, as above) and a very different colour set (saturated blue and yellow), but used lavender and rosemary as scent stimuli in both cases. Our experiment yielded unexpected results: with the different colours, the results were best explained by elemental learning, but with the similar colour set, bees exhibited configural learning. Still, their memory of the bimodal compound was excellent. Finally, we looked at reversal learning. We reverse-trained bees with bimodal stimuli to find out whether bimodality leads to better reversal learning compared to monomodal stimuli. We trained bees on AX (rewarded) versus BY (unrewarded), then on AX (unrewarded) versus BY (rewarded), and finally on AX (rewarded) versus BY (unrewarded) again. We performed this experiment with both colour sets, always using the same two scents (lavender and rosemary). It turned out that bimodality does not help bees "see the pattern" and anticipate the switch. Generally, bees trained on the different colour set performed better than bees trained on the similar colour set, indicating that stimulus salience influences reversal learning.

Relevance: 20.00%

Abstract:

In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of high-performance computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". This dissertation tries to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily transformed by recent advances in the field. Practically speaking, the answers are based on an exhaustive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed by large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging, mainly unsupervised paradigm that is more biologically inspired. It tries to draw insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
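The convolutional building block at the heart of a CNN can be sketched from scratch in a few lines (an illustration of convolution, ReLU and max pooling, not the implementation used in the thesis):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation, as in most deep
    learning frameworks) of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def conv_block(img, kernel):
    """One CNN stage: convolution -> ReLU -> 2x2 max pooling."""
    act = np.maximum(conv2d(img, kernel), 0.0)          # ReLU
    h, w = act.shape[0] // 2 * 2, act.shape[1] // 2 * 2  # crop to even size
    act = act[:h, :w]
    return act.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

Run on an image with a vertical edge and a hand-picked edge-detecting kernel, the block produces a small activation map that fires only along the edge; a trained CNN learns such kernels automatically and stacks many of these stages, which is the automatic feature extraction discussed above.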

Relevance: 20.00%

Abstract:

One to three percent of patients exposed to intravenously injected iodinated contrast media (CM) develop delayed hypersensitivity reactions. Positive patch test reactions, immunohistological findings, and CM-specific proliferation of T cells in vitro suggest a pathogenetic role for T cells. We have previously demonstrated that CM-specific T cell clones (TCCs) show a broad range of cross-reactivity to different CM. However, the mechanism of specific CM recognition by T cell receptors (TCRs) has not been analysed so far.

Relevance: 20.00%

Abstract:

There is conflicting evidence whether Parkinson's disease (PD) is associated with impaired recognition memory and which of its underlying processes, namely recollection and familiarity, is more affected by the disease. The present study explored the contribution of recollection and familiarity to verbal recognition memory performance in 14 nondemented PD patients and a healthy control group with two different methods: (i) the word-frequency mirror effect, and (ii) Remember/Know judgments. Overall, recognition memory of patients was intact. The word-frequency mirror effect was observed both in patients and controls: Hit rates were higher and false alarm rates were lower for low-frequency compared to high-frequency words. However, Remember/Know judgments indicated normal recollection, but impaired familiarity. Our findings suggest that mild to moderate PD patients are selectively impaired at familiarity whereas recollection and overall recognition memory are intact.

Relevance: 20.00%

Abstract:

Over the last 10 years, several molecular markers have become established as useful tools in the hematologist's armamentarium. As a consequence, the number of hematologic molecular analyses performed has increased immensely. Such tests often replace or complement other laboratory methods. Molecular markers can be useful in many ways: they can serve for diagnostics, describe the prognostic profile, predict which types of drugs are indicated, and be used for therapeutic monitoring of the patient to indicate an adequate response or predict resistance or relapse of the disease. Many markers fulfill more than one of these roles. Most important, however, is the right choice of analyses at the right time points!

Relevance: 20.00%

Abstract:

Supramolecular two-dimensional engineering epitomizes the design of complex molecular architectures through recognition events in multicomponent self-assembly. Despite being the subject of in-depth experimental studies, such articulated phenomena have not yet been elucidated in time and space with atomic precision. Here we use atomistic molecular dynamics to simulate the recognition of complementary hydrogen-bonding modules forming 2D porous networks on graphite. We describe the transition path from the melt to the crystalline hexagonal phase and show that self-assembly proceeds through a series of intermediate states featuring a plethora of polygonal types. Finally, we design a novel bicomponent system possessing kinetically improved self-healing ability in silico, demonstrating that a priori engineering of 2D self-assembly is possible.

Relevance: 20.00%

Abstract:

Olfactory impairment has been reported in drug-induced parkinsonism (DIP), but the relationship between dopaminergic dysfunction and smell deficits in DIP patients has not been characterized. To this end, we studied 16 DIP patients and 13 patients affected by Parkinson's disease (PD) using the "Sniffin' Sticks" test and [(123)I]FP-CIT SPECT (single-photon emission computed tomography). DIP patients were divided according to normal (n = 9) or abnormal (n = 7) putamen dopamine transporter binding. Nineteen healthy age- and sex-matched subjects served as controls for smell function. Patients with DIP and pathological putamen uptake had abnormal olfactory function; in this group, olfactory TDI scores (odor threshold, discrimination and identification) correlated significantly with putamen uptake values, as observed in PD patients. By contrast, DIP patients with normal putamen uptake showed odor function similar to that of control subjects, with the exception of the threshold subtest, and no significant correlation was observed between their olfactory TDI scores and putamen uptake values. The results of our study suggest that smell deficits in DIP patients may be associated with dopaminergic loss rather than with drug-mediated dopamine receptor blockade. These preliminary results may have prognostic and therapeutic implications, as abnormalities in these individuals may be suggestive of an underlying PD-like neurodegenerative process.

Relevance: 20.00%

Abstract:

Mapping and ablation of atrial tachycardias (ATs) secondary to catheter ablation of atrial fibrillation (AF) is often challenging due to the complex atrial substrate, different AT mechanisms, and potential origin not only in the left atrium (LA) but also from the right atrium (RA) and the adjacent thoracic veins.

Relevance: 20.00%

Abstract:

Chemicals can elicit T-cell-mediated diseases such as allergic contact dermatitis and adverse drug reactions. Therefore, testing of chemicals, drugs and protein allergens for hazard identification and risk assessment is essential in regulatory toxicology. The seventh amendment of the EU Cosmetics Directive now prohibits the testing of cosmetic ingredients in mice, guinea pigs and other animal species to assess their sensitizing potential. In addition, the EU Chemicals Directive REACh requires the retesting of more than 30,000 chemicals for different toxicological endpoints, including sensitization, requiring vast numbers of animals. Therefore, alternative methods are urgently needed to eventually replace animal testing. Here, we summarize the outcome of an expert meeting in Rome on 7 November 2009 on the development of T-cell-based in vitro assays as tools in immunotoxicology to identify hazardous chemicals and drugs. In addition, we provide an overview of the development of the field over the last two decades.