Abstract:
Vehicular networks, also known as VANETs, have long been a subject of study. During a research period at the University of California, Los Angeles (UCLA), it was possible to study protocols suited to the exchange of content among vehicles under the Named Data Networking (NDN) paradigm. Named Data Networking represents a new communication model for retrieving content within the network. In VANETs, every vehicle is potentially a content provider as well as a requester. The reference infrastructure on the university campus allows the collection of the data needed to study the problem, not only from a practical standpoint but also from a theoretical one. Indeed, given the type of tests involved and the intrinsic difficulties they entail, simulation plays an important role in the development and study of the protocol within vehicular networks. The research activity is organized around the following aspects: an introduction to the new communication paradigm (principles of Named Data Networking, how NDN works, vehicular networks, applicability of NDN to VANETs); mobility models for vehicular networks (guidelines for building a mobility model, the current state of available models, network simulators, the tools used and how they work); simulation activity (planning and implementation of different types of vehicular network scenarios); and analysis of the data collected in the previous phase (the collected data are processed, attempting to capture their most significant aspects). The objective is to conduct a feasibility study on the application of NDN to mobile networks, in particular to vehicular networks in urban settings. When the collaboration with the research group of the Network Research Lab at UCLA began, the first version of NDN containing the extension designed for vehicular scenarios had only recently been released, so no studies of this kind of scenario were yet available in the literature. The aim is to extract information and derive meaningful indications about the performance of the system.
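To make the paradigm concrete, here is a minimal, illustrative Python sketch of NDN-style Interest/Data handling at a single node, with a Content Store, a Pending Interest Table (PIT), and VANET-style re-broadcasting to neighbors in place of a FIB lookup. The class and method names are assumptions for illustration only; they do not reflect the actual NDN codebase or the vehicular extension studied in the thesis.

```python
# Minimal sketch of NDN-style Interest/Data handling at a single node.
# Names and the broadcast step are illustrative assumptions, not the
# actual NDN implementation used in the thesis.

class NdnNode:
    def __init__(self, name):
        self.name = name
        self.content_store = {}   # content name -> cached data
        self.pit = {}             # content name -> set of requesting faces
        self.neighbors = []       # in a VANET: vehicles currently in radio range

    def publish(self, content_name, data):
        self.content_store[content_name] = data

    def on_interest(self, content_name, from_face):
        # 1. Content Store hit: answer immediately (any vehicle can be a provider).
        if content_name in self.content_store:
            from_face.on_data(content_name, self.content_store[content_name])
            return
        # 2. PIT hit: aggregate the request, do not re-forward.
        if content_name in self.pit:
            self.pit[content_name].add(from_face)
            return
        # 3. Otherwise record the pending Interest and re-broadcast it
        #    (VANET-style flooding instead of a FIB lookup).
        self.pit[content_name] = {from_face}
        for n in self.neighbors:
            n.on_interest(content_name, self)

    def on_data(self, content_name, data):
        # Data follows the reverse PIT path and is cached along the way.
        self.content_store[content_name] = data
        for face in self.pit.pop(content_name, ()):
            if face is not self:
                face.on_data(content_name, data)

# Hypothetical usage: vehicle B holds the content, vehicle A requests it.
a, b = NdnNode("A"), NdnNode("B")
a.neighbors, b.neighbors = [b], [a]
b.publish("/traffic/segment42", "congestion ahead")
a.on_interest("/traffic/segment42", a)   # A requests on its own behalf
print(a.content_store["/traffic/segment42"])
```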
Abstract:
Antibody microarrays are of great research interest because of their potential application as biosensors for high-throughput protein and pathogen screening technologies. In this active area, there is still a need for novel structures and assemblies providing insight into binding interactions, such as spherical and annulus-shaped protein structures, e.g. for the utilization of curved surfaces for enhanced protein-protein interactions and the detection of antigens. Therefore, the goal of the presented work was to establish a new technique for the label-free detection of biomolecules and bacteria on topographically structured surfaces suitable for antibody binding.

In the first part of the presented thesis, the fabrication of monolayers of inverse opals with 10 μm diameter and the immobilization of antibodies on their interior surface is described. For this purpose, several established methods for linking antibodies to glass, including Schiff bases, EDC/S-NHS chemistry and the biotin-streptavidin affinity system, were tested. The employed methods included immunofluorescence and image analysis by phase contrast microscopy. It could be shown that these methods were not successful in terms of antibody immobilization and subsequent bacteria binding. Hence, a method based on the application of an active-ester silane was introduced. It showed promising results but also the need for further analysis. In particular, the search for alternative antibodies addressing other antigens on the exterior of bacteria will be pursued in the future.

Building on this ability to control antibody-functionalized surfaces, a new technique is presented that employs colloidal templating to yield large-scale (~cm²) 2D arrays of antibodies against E. coli K12, eGFP and human integrin αvβ3 on a versatile glass surface. The antibodies were swept to reside around the templating microspheres during solution drying and physisorbed onto the glass. After removing the microspheres, the formation of annulus-shaped antibody structures was observed. The preserved antibody structure and functionality is shown by binding the specific antigens and secondary antibodies. The improved detection of specific bacteria from a crude solution compared to conventional "flat" antibody surfaces is demonstrated, as well as the establishment of an integrin-binding platform for targeted recognition and surface interactions of eukaryotic cells. The structures were investigated by atomic force, confocal and fluorescence microscopy. Operational parameters such as drying time, temperature, humidity and surfactants were optimized to obtain a stable antibody structure.
Abstract:
Automatically recognizing faces captured under uncontrolled environments has always been a challenging topic over the past decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means to improve the robustness of face recognition under challenging environments. In particular, we introduce cohort score normalization into the undersampled face recognition problem. Further, we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on normalization performance. The experimental results show that a larger cohort set gives more stable and often better results up to a point, after which performance saturates, and that cohort samples of different quality indeed produce different normalization performance.

Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional (e.g., geometric transformations introduced by the acquisition device) and intentional alterations (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms are able to overcome limited digital alterations but are sensitive to more substantial modifications. Further, we develop two useful descriptors for detecting those alterations which can significantly affect recognition performance. Finally, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery. Extensive experiments conducted on a plastic surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
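As context for how cohort score normalization typically works in biometric verification, the following Python sketch shows a standard T-norm-style normalization, in which a raw match score is standardized against the probe's scores over a cohort set. The function names and the plain mean/variance statistics are assumptions for illustration; the specific normalization method developed in this work may differ.

```python
# Minimal sketch of cohort-based score normalization (T-norm style).
# Illustrative only; the thesis' own normalization scheme may differ.

import numpy as np

def cohort_normalize(raw_score, probe, cohort_templates, match_fn):
    """Normalize a raw probe-vs-claimed-identity score using the
    distribution of the probe's scores against a cohort set."""
    cohort_scores = np.array([match_fn(probe, c) for c in cohort_templates])
    mu, sigma = cohort_scores.mean(), cohort_scores.std()
    return (raw_score - mu) / (sigma + 1e-8)  # guard against zero variance

# Hypothetical usage with cosine similarity between feature vectors.
def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.normal(size=128)
claimed = rng.normal(size=128)
cohort = [rng.normal(size=128) for _ in range(50)]
print(cohort_normalize(cos(probe, claimed), probe, cohort, cos))
```

A larger cohort set yields more stable estimates of the mean and standard deviation, which is consistent with the observation above that results stabilize and then saturate as cohort size grows.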
Abstract:
The study of the bio-recognition phenomena behind a biological process is nowadays considered a useful tool to deeply understand physiological mechanisms, allowing the discovery of novel biological targets and the development of new lead candidates. Moreover, understanding this kind of phenomena can be helpful in characterizing the absorption, distribution, metabolism, elimination and toxicity properties of a new drug (ADMET parameters). Recent estimates show that about half of all drugs in development fail to make it to the market because of ADMET deficiencies; thus, a rapid determination of ADMET parameters in the early stages of drug discovery would save money and time, making it possible to choose the best compounds and discard unsuitable ones early. The monitoring of drug binding to plasma proteins is becoming essential in the field of drug discovery to characterize drug distribution in the human body. Human serum albumin (HSA) is the most abundant protein in plasma, playing a fundamental role in the transport of drugs, metabolites and endogenous factors; the study of the binding mechanism to HSA has therefore become crucial to the early characterization of the pharmacokinetic profile of new potential leads. Furthermore, most of the distribution experiments carried out in vivo are performed on animals. Hence, it is interesting to determine the binding of new compounds to albumins from different species in order to evaluate the reliability of extrapolating distribution data obtained in animals to humans. It is clear that the characterization of interactions between proteins and drugs creates a growing need for methodologies to study specific molecular events. A wide variety of biochemical techniques have been applied to this purpose. High-performance liquid affinity chromatography, circular dichroism and optical biosensors represent three techniques able to elucidate the interaction of a new drug with its target and with other proteins that could interfere with ADMET parameters.
Abstract:
The Eye-Trauma project is part of the development of a surgical simulator for traumas to the ocular region, developed in collaboration with the Simulation Group in Boston, Harvard Medical School and Massachusetts General Hospital. The simulator features a silicone torso equipped with interchangeable modules of the ocular region, used to simulate different types of trauma. The user is asked to perform the medical suturing procedure using surgical instruments fitted with force and opening sensors. The collected data are used within the software for gesture recognition and real-time performance assessment. The gesture recognition algorithm, which I developed, is based on the concept of state machines; transitions between states occur according to the events detected by the simulator.
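As an illustration of the state-machine idea described above, here is a minimal, event-driven finite state machine in Python. The states, events, and the toy suturing gesture are invented for illustration; the simulator's actual states, sensor events, and thresholds are not specified in the abstract.

```python
# Minimal sketch of an event-driven finite state machine for gesture
# recognition. States and events below are hypothetical examples.

class GestureFSM:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # (state, event) -> next state
        self.state = start
        self.accepting = accepting      # states meaning "gesture completed"

    def on_event(self, event):
        # Stay in the current state if no transition matches the event.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state in self.accepting

# Hypothetical suturing gesture: grasp the needle, pierce tissue, release.
fsm = GestureFSM(
    transitions={
        ("idle", "tool_closed"): "grasping",
        ("grasping", "force_spike"): "piercing",
        ("piercing", "tool_opened"): "done",
    },
    start="idle",
    accepting={"done"},
)

# Events as they might arrive from the instrument's force/opening sensors.
for ev in ["tool_closed", "force_spike", "tool_opened"]:
    if fsm.on_event(ev):
        print("gesture recognized")
```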
Abstract:
Lysosomal transport of cationic amino acids (CAAs) represents a rescue pathway in cystinosis therapy. Such a transport system has been described in human skin fibroblasts and designated system c. Furthermore, lysosomal arginine represents a substrate source for endothelial NO synthase (eNOS). The NO produced by eNOS is an important vasoprotective signaling molecule. The aim was therefore to find out whether members of the SLC7 subfamily hCAT might represent system c.

In this work, I was able to demonstrate the lysosomal localization of several CAT isoforms, both endogenous and overexpressed as EGFP fusion proteins. Fluorescence microscopy showed that the fusion proteins hCAT-1.EGFP and SLC7A14.EGFP overexpressed in U373MG cells co-localize with the lysosomal fluorescent dye LysoTracker. Localization in mitochondria or the endoplasmic reticulum could be excluded using the corresponding fluorescent dyes. In addition, the overexpressed proteins hCAT-1.EGFP, hCAT-2B.EGFP and SLC7A14.EGFP were enriched in the lysosomal fraction C from U373MG cells together with the lysosomal markers LAMP-1 and cathepsin D. The same applied to endogenous hCAT-1 in the lysosomal fraction C from EA.hy926 and U373MG cells, as well as to SLC7A14 in the human skin fibroblasts FCys5. With the antibody against native SLC7A14 generated in the course of this work, the endogenous expression and localization of SLC7A14 could be analyzed for the first time in different cell types.

Although downregulation of hCAT-1 in EA.hy926 endothelial cells did not reduce the supply of eNOS with lysosomal arginine, a function of hCAT-1 in the lysosome is likely. Both [3H]arginine and [3H]lysine uptake into fraction C from U373MG-hCAT-1.EGFP cells was significantly higher than into fraction C from EGFP control cells. The same could be shown for hCAT-2B.EGFP. In addition, lysosomal samples from U373MG-hCAT-2B.EGFP cells showed electrogenic transport activity for arginine in SSM-based electrophysiology. The protein SLC7A14.EGFP showed no activity in either of the two transport studies performed. This was unexpected, since the characterization of the hCAT-2/A14_BK chimera, carrying the "functional domain" of SLC7A14 in the backbone of hCAT-2 (begun in my diploma thesis and extended in this dissertation), had previously strengthened the suspicion that SLC7A14 might be a lysosomally localized transporter for CAAs. These studies did, however, show for the first time that the "functional domain" of the hCATs mediates pH dependence and plays a role in substrate recognition.

In the future, further attempts will be made to demonstrate endogenous transport activity of the hCATs for CAAs in the lysosome and to identify the substrate of the intracellularly localized orphan protein SLC7A14. SLC7A14 might play a role as a transporter for neurotransmitters, since very prominent expression was found in the CNS.
Abstract:
In this thesis we discuss technologies that allow us to approach sentiment analysis of newspaper articles. The final goal of this work is to help social scholars carry out content analysis on large corpora of texts more quickly, thanks to the support of automatic text classification.
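As a rough illustration of what automatic text classification can look like in practice, here is a minimal Python sketch using bag-of-words features and a linear classifier. The choice of scikit-learn, TF-IDF features, and logistic regression is an assumption for illustration; the abstract does not name the tools or models actually used in the thesis.

```python
# Minimal sketch of automatic text classification for sentiment analysis.
# Library and model choices are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled newspaper sentences (1 = positive, 0 = negative).
train_texts = ["the reform was a great success", "the policy failed badly"]
train_labels = [1, 0]

# Bag-of-words (unigrams and bigrams) feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)
print(clf.predict(["a promising success for the city"]))
```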
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that can only work in the context of High Performance Computing with massive amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". This dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed at large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm: a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
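For reference, here is a minimal sketch of the supervised side of this comparison: a small convolutional network for object recognition, written in PyTorch. The layer sizes, input resolution, and framework are illustrative assumptions, not the architectures actually benchmarked in the dissertation.

```python
# Minimal sketch of a CNN for object recognition. Architecture details
# are illustrative assumptions.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)          # learned, automatic feature extraction
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))  # batch of four 32x32 RGB images
print(logits.shape)                        # torch.Size([4, 10])
```

Unlike HTM's mainly unsupervised, time-aware learning, a network like this learns its feature hierarchy purely from labelled examples via backpropagation, which is precisely the contrast the dissertation sets out to examine.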
Abstract:
One to three percent of patients exposed to intravenously injected iodinated contrast media (CM) develop delayed hypersensitivity reactions. Positive patch test reactions, immunohistological findings, and CM-specific proliferation of T cells in vitro suggest a pathogenetic role for T cells. We have previously demonstrated that CM-specific T cell clones (TCCs) show a broad range of cross-reactivity to different CM. However, the mechanism of specific CM recognition by T cell receptors (TCRs) has not been analysed so far.
Abstract:
There is conflicting evidence as to whether Parkinson's disease (PD) is associated with impaired recognition memory and which of its underlying processes, namely recollection and familiarity, is more affected by the disease. The present study explored the contribution of recollection and familiarity to verbal recognition memory performance in 14 nondemented PD patients and a healthy control group using two different methods: (i) the word-frequency mirror effect, and (ii) Remember/Know judgments. Overall, the patients' recognition memory was intact. The word-frequency mirror effect was observed in both patients and controls: hit rates were higher and false alarm rates were lower for low-frequency compared to high-frequency words. However, Remember/Know judgments indicated normal recollection but impaired familiarity. Our findings suggest that familiarity is selectively impaired in mild to moderate PD patients, whereas recollection and overall recognition memory are intact.
Abstract:
During the last 10 years, several molecular markers have been established as useful tools in the armamentarium of the hematologist. As a consequence, the number of hematologic molecular analyses performed has increased immensely. Often, such tests replace or complement other laboratory methods. Molecular markers can be useful in many ways: they can serve for diagnostics, describe the prognostic profile, predict which types of drugs are indicated, and be used for the therapeutic monitoring of the patient to indicate an adequate response or predict resistance or relapse of the disease. Many markers fulfill more than one of these roles. Most important, however, is the right choice of analyses at the right time points!
Abstract:
Supramolecular two-dimensional engineering epitomizes the design of complex molecular architectures through recognition events in multicomponent self-assembly. Despite being the subject of in-depth experimental studies, such intricate phenomena have not yet been elucidated in time and space with atomic precision. Here we use atomistic molecular dynamics to simulate the recognition of complementary hydrogen-bonding modules forming 2D porous networks on graphite. We describe the transition path from the melt to the crystalline hexagonal phase and show that self-assembly proceeds through a series of intermediate states featuring a plethora of polygonal types. Finally, we design a novel bicomponent system possessing kinetically improved self-healing ability in silico, thus demonstrating that a priori engineering of 2D self-assembly is possible.
Abstract:
Mapping and ablation of atrial tachycardias (ATs) secondary to catheter ablation of atrial fibrillation (AF) is often challenging due to the complex atrial substrate, different AT mechanisms, and potential origin not only in the left atrium (LA) but also from the right atrium (RA) and the adjacent thoracic veins.
Abstract:
Chemicals can elicit T-cell-mediated diseases such as allergic contact dermatitis and adverse drug reactions. Therefore, testing of chemicals, drugs and protein allergens for hazard identification and risk assessment is essential in regulatory toxicology. The seventh amendment of the EU Cosmetics Directive now prohibits the testing of cosmetic ingredients in mice, guinea pigs and other animal species to assess their sensitizing potential. In addition, the EU Chemicals Directive REACh requires the retesting of more than 30,000 chemicals for different toxicological endpoints, including sensitization, requiring vast numbers of animals. Therefore, alternative methods are urgently needed to eventually replace animal testing. Here, we summarize the outcome of an expert meeting in Rome on 7 November 2009 on the development of T-cell-based in vitro assays as tools in immunotoxicology to identify hazardous chemicals and drugs. In addition, we provide an overview of the development of the field over the last two decades.
Abstract:
Background: Many medical exams use 5 options for multiple choice questions (MCQs), although the literature suggests that 3 options are optimal. Previous studies on this topic have often been based on non-medical examinations, so we sought to analyse rarely selected, 'non-functional' distractors (NF-D) in high-stakes medical examinations, their detection by item authors, and the psychometric changes resulting from a reduction in the number of options.

Methods: Based on Swiss Federal MCQ examinations from 2005-2007, the frequency of NF-D (selected by <1% or <5% of the candidates) was calculated. Distractors that were chosen the least or second least were identified, and the candidates who chose them were allocated to the remaining options using two extreme assumptions about their hypothetical behaviour: if rarely selected distractors were eliminated, candidates could randomly choose another option, or purposively choose the correct answer from which they had originally been distracted. In a second step, 37 experts were asked to mark the least plausible options. The consequences of a reduction from 4 to 3 or 2 distractors, based either on item statistics or on the experts' ratings, were modelled with respect to difficulty, discrimination and reliability.

Results: About 70% of the 5-option items had at least 1 NF-D selected by <1% of the candidates (97% for NF-Ds selected by <5%). Only a reduction to 2 distractors, under the assumption that candidates would switch to the correct answer in the absence of a 'non-functional' distractor, led to relevant differences in reliability and difficulty (and, to a lesser degree, discrimination). The experts' ratings resulted in slightly greater changes than the statistical approach.

Conclusions: Based on item statistics and/or an expert panel's recommendation, a varying number of 3-4 (or in some cases 2) plausible distractors could be chosen without marked deterioration in psychometric characteristics.
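The reallocation logic described in the Methods can be illustrated with a small simulation. The following Python sketch removes a rarely selected distractor and reassigns the candidates who chose it under the two extreme assumptions, then compares item difficulty (the proportion correct). The answer counts are invented, and the model is reduced to difficulty alone; the study's actual data and its full psychometric modelling of discrimination and reliability are not reproduced here.

```python
# Minimal sketch of dropping a non-functional distractor under the two
# extreme reallocation assumptions. Counts are hypothetical.

import random

# Hypothetical answer counts for one 5-option item; "A" is the correct answer
# and "E" is a non-functional distractor (chosen by <1% of candidates).
counts = {"A": 620, "B": 210, "C": 130, "D": 36, "E": 4}
correct = "A"

def difficulty(c):
    return c[correct] / sum(c.values())  # proportion correct (p value)

def drop_distractor(c, dead, assumption):
    c = dict(c)
    displaced = c.pop(dead)
    if assumption == "switch_to_correct":
        # Extreme 1: the distractor had genuinely distracted these candidates.
        c[correct] += displaced
    else:
        # Extreme 2: they re-choose at random among the remaining options.
        for _ in range(displaced):
            c[random.choice(list(c))] += 1
    return c

random.seed(0)
print("original p:            ", round(difficulty(counts), 3))
print("switch-to-correct p:   ",
      round(difficulty(drop_distractor(counts, "E", "switch_to_correct")), 3))
print("random re-choice p:    ",
      round(difficulty(drop_distractor(counts, "E", "random")), 3))
```

Because so few candidates chose the non-functional distractor, both assumptions move the p value only marginally, mirroring the study's finding that removing NF-Ds barely affects psychometric characteristics.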