935 results for face recognition, face detection, face verification, web application
Abstract:
The Everglades R-EMAP project for year 2005 produced large quantities of data collected at 232 sampling sites. Data collection and analysis are on-going, long-term activities conducted by scientists of different disciplines at irregular intervals of several years. The data sets collected for 2005 include bio-geo-chemical (including mercury and hydroperiod), fish, invertebrate, periphyton, and plant data. Each sampling site is associated with a location, a description of the site to provide a general overview, and photographs to provide a pictorial impression. The Geographic Information Systems and Remote Sensing Center (GISRSC) at Florida International University (FIU) has designed and implemented an enterprise database for long-term storage of the project's data in a central repository, providing the framework of data storage for the continuity of future sampling campaigns and allowing integration of new sample data as they become available. In addition, GISRSC provides this interactive web application for easy, quick and effective retrieval and visualization of that data.
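A minimal sketch of how such a central repository could organise sampling sites and their measurements (table and column names are hypothetical, not the actual GISRSC schema):

```python
# Hypothetical sketch of a central-repository schema for R-EMAP sampling sites;
# table and column names are illustrative, not the actual GISRSC design.
import sqlite3

conn = sqlite3.connect("remap.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS sampling_site (
    site_id     INTEGER PRIMARY KEY,
    latitude    REAL NOT NULL,
    longitude   REAL NOT NULL,
    description TEXT,
    photo_path  TEXT
);
CREATE TABLE IF NOT EXISTS measurement (
    measurement_id INTEGER PRIMARY KEY,
    site_id        INTEGER REFERENCES sampling_site(site_id),
    campaign_year  INTEGER,          -- e.g. 2005
    category       TEXT,             -- bio-geo-chemical, fish, invertebrate, ...
    parameter      TEXT,             -- e.g. 'mercury', 'hydroperiod'
    value          REAL,
    unit           TEXT
);
""")
conn.commit()
```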
Abstract:
This paper presents a monitoring system devoted to small-sized photovoltaic (PV) power plants. The system is characterized by a high level of integration, a low cost when compared to the cost of the PV system to be monitored, and easy installation in the majority of PV plants with an installed power of a few kW. The system is able to collect, store, process and display the electrical and meteorological parameters that are crucial when monitoring PV facilities. The identification of failures in the PV system and the elaboration of performance analyses of such facilities are other important features of the developed system. Access to information about the monitored facilities is provided by a web application developed with a focus on mobile devices. In addition, the developed monitoring system can be integrated with the central supervision system of Martifer Solar (a company focused on the development, operation and maintenance of PV systems).
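As an illustration of the kind of performance analysis such a system can derive from its electrical and meteorological measurements, a hedged sketch of a performance-ratio calculation (function and variable names are assumptions, not Martifer Solar's implementation):

```python
# Illustrative sketch of one indicator a PV monitoring system can compute: the
# performance ratio (final yield over reference yield). Names are assumptions.
def performance_ratio(energy_kwh, nominal_power_kw, irradiation_kwh_m2,
                      g_stc_kw_m2=1.0):
    """Performance ratio over a monitoring period (IEC 61724-style definition)."""
    final_yield = energy_kwh / nominal_power_kw            # kWh per kWp
    reference_yield = irradiation_kwh_m2 / g_stc_kw_m2     # equivalent sun hours
    return final_yield / reference_yield

# Example: a 5 kWp plant producing 620 kWh in a month with 150 kWh/m2 of in-plane irradiation.
print(round(performance_ratio(620, 5.0, 150), 2))  # 0.83
```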
Abstract:
The term "crowdsensing" refers to a technique in which a large group of individuals with mobile devices collectively acquire and share data of various kinds in order to extract useful information. The concept of Mobile Crowdsensing is very recent and stems from the latest technological innovations in online connectivity and data capture; as a result, there is currently no true real-world application of it, and its purely theoretical, overly specific modelling limits our knowledge of a field that can prove very useful for research purposes. YouCrowd is a web platform that implements a complete crowdsourcing system, able to read data from the numerous sensors of a smartphone and share them, so that users who complete a campaign receive remuneration. The web application's core implementation is built on NodeJS, and it is configured as a dynamic platform that varies its user interface according to the data requests made by the administrators. The testing of YouCrowd involved a good number of participants with varying levels of experience with computer tools, showing good performance in relation to the difficulty of the task and the capabilities of the device under test.
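A hypothetical sketch, in Python rather than the NodeJS stack described above, of the kind of endpoint a crowdsensing platform like YouCrowd exposes for collecting smartphone sensor readings (route and field names are illustrative):

```python
# Hypothetical sketch (not the actual NodeJS implementation) of a crowdsensing
# submission endpoint: a worker posts a batch of smartphone sensor readings for
# a campaign and the contribution is recorded for later remuneration.
from flask import Flask, request, jsonify

app = Flask(__name__)
contributions = {}  # (campaign_id, user_id) -> list of readings; in-memory stand-in for a DB

@app.post("/campaigns/<int:campaign_id>/readings")
def submit_readings(campaign_id):
    payload = request.get_json(force=True)
    user_id = payload["user_id"]
    readings = payload["readings"]   # e.g. [{"sensor": "accelerometer", "t": ..., "value": [...]}]
    contributions.setdefault((campaign_id, user_id), []).extend(readings)
    # Remuneration would be granted once the campaign's requirements are met.
    return jsonify({"accepted": len(readings)}), 201

if __name__ == "__main__":
    app.run()
```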
Abstract:
This thesis focuses on automating the time-consuming task of manually counting activated neurons in fluorescent microscopy images, which is used to study the mechanisms underlying torpor. The traditional method of manual annotation can introduce bias and delay the outcome of experiments, so the author investigates a deep-learning-based procedure to automate this task. The author explores two state-of-the-art convolutional neural network (CNN) architectures, UNet and the ResUnet family of models, and uses a counting-by-segmentation strategy to provide a justification of the objects considered during the counting process. The author also explores a weakly-supervised learning strategy that exploits only dot annotations. The author quantifies the advantages, in terms of data reduction and counting performance, obtainable with a transfer-learning approach and, specifically, a fine-tuning procedure. The author released the dataset used for the supervised use case and all the pre-trained models, and designed a web application to share both the counting pipeline developed in this work and the models pre-trained on the analyzed dataset.
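A minimal sketch of the counting-by-segmentation step described above, assuming the network outputs a per-pixel probability map (threshold and minimum object size are illustrative choices, not the thesis settings):

```python
# Minimal sketch of counting-by-segmentation: the number of activated neurons is
# taken as the number of connected components in the thresholded segmentation
# map predicted by a UNet-style network. Threshold and min_size are assumptions.
import numpy as np
from scipy import ndimage

def count_by_segmentation(prob_map: np.ndarray, threshold=0.5, min_size=10) -> int:
    mask = prob_map > threshold                        # binarize the network output
    labels, n = ndimage.label(mask)                    # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1)) # pixels per component
    return int(np.sum(sizes >= min_size))              # discard spurious tiny blobs

# Deterministic demo with two synthetic "neurons":
demo = np.zeros((64, 64))
demo[10:20, 10:20] = 0.9
demo[40:55, 40:55] = 0.8
print(count_by_segmentation(demo))  # -> 2
```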
Abstract:
The United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) recognises the right of all persons to work: "States Parties shall take appropriate measures to ensure to persons with disabilities access, on an equal basis with others, to the physical environment, to transportation, to information and communications, including information and communications technologies and systems, and to other facilities and services open or provided to the public" (United Nations 2016, p. 14). Despite the political and cultural progress being made internationally in terms of equal opportunities and inclusion, people with disabilities continue to encounter barriers that limit their active participation in the world of work. Against this background, the research aims to investigate the needs of people with disabilities (e.g. being welcomed, access to the physical and digital environment, participation in company life, etc.) and to develop a digital application (web app), addressed to companies, intended to monitor and promote workplace inclusion. Following the design thinking model and drawing on a mixed-methods (qualitative and quantitative) research process, Job inclusion for all was conceived: a digital environment based on the adaptation of two "meta-reflection" tools, the Index for inclusion job version and the employment role mapping. The prototyped digital tool was tested and validated during the last year of the research by an international multidisciplinary team; this process made it possible to collect feedback (on the relevance and clarity of the items, and on strengths and weaknesses) that was used to improve and refine the final version of the web app prototype.
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and are serving as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of the knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process that consists of steps such as the elicitation and formalization of requirements, and the development, testing, refactoring, and release of the ontology. The testing of the ontology is a crucial and occasionally overlooked step of the process due to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from the ontology engineers. A similar lack of tool support affects the requirement elicitation process. In this respect, the rise in the adoption and accessibility of knowledge graphs allows for the development and use of automated tools to assist with the elicitation of requirements from such a complementary source of data. Therefore, this doctoral research is focused on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementations, and the results are promising.
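As an illustration of the kind of automated ontology test a manager such as XDTesting could run, a hedged sketch in which a competency question is expressed as a SPARQL ASK query against the ontology (file name, namespace and query are hypothetical):

```python
# Illustrative sketch of an automated ontology test: a competency question is
# expressed as a SPARQL ASK query and the test passes if the ontology (plus
# sample data) can answer it. The ontology file and query are hypothetical.
from rdflib import Graph

def competency_question_passes(ontology_path: str, sparql_ask_query: str) -> bool:
    g = Graph()
    g.parse(ontology_path)                      # e.g. a Turtle or RDF/XML file
    return bool(g.query(sparql_ask_query).askAnswer)

query = """
ASK { ?work a <http://example.org/onto#Document> ;
            <http://example.org/onto#hasAuthor> ?author . }
"""
# print(competency_question_passes("ontology.ttl", query))
```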
Abstract:
In recent decades, two prominent trends have influenced the data modeling field, namely network analysis and machine learning. This thesis explores the practical applications of these techniques within the domain of drug research, unveiling their multifaceted potential for advancing our comprehension of complex biological systems. The research undertaken during this PhD program is situated at the intersection of network theory, computational methods, and drug research. Across the six projects presented herein, there is a gradual increase in model complexity. These projects traverse a diverse range of topics, with a specific emphasis on drug repurposing and safety in the context of neurological diseases. The aim of these projects is to leverage existing biomedical knowledge to develop innovative approaches that bolster drug research. The investigations have produced practical solutions, not only providing insights into the intricacies of biological systems, but also allowing the creation of valuable tools for their analysis. In short, the achievements are:
• A novel computational algorithm to identify adverse events specific to fixed-dose drug combinations.
• A web application that tracks the clinical drug research response to SARS-CoV-2.
• A Python package for differential gene expression analysis and the identification of key regulatory "switch genes".
• The identification of pivotal events causing drug-induced impulse control disorders linked to specific medications.
• An automated pipeline for discovering potential drug repurposing opportunities.
• The creation of a comprehensive knowledge graph and development of a graph machine learning model for predictions.
Collectively, these projects illustrate diverse applications of data science and network-based methodologies, highlighting the profound impact they can have in supporting drug research activities.
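As an illustration of the network-based methodologies mentioned above (not a reproduction of any of the six projects), a minimal sketch of a drug-disease proximity measure on a toy protein-protein interaction graph:

```python
# Illustrative sketch of a network-based measure often used in drug repurposing:
# the average shortest-path distance between a drug's targets and a disease's
# genes on a protein-protein interaction (PPI) graph. Toy data, assumed names.
import networkx as nx

def drug_disease_proximity(ppi: nx.Graph, drug_targets, disease_genes) -> float:
    dists = []
    for t in drug_targets:
        d = min(nx.shortest_path_length(ppi, t, g)
                for g in disease_genes if nx.has_path(ppi, t, g))
        dists.append(d)
    return sum(dists) / len(dists)

# Toy interactome: targets T1, T2; disease genes G1, G2.
ppi = nx.Graph([("T1", "A"), ("A", "G1"), ("T2", "G2"), ("G2", "G1")])
print(drug_disease_proximity(ppi, ["T1", "T2"], ["G1", "G2"]))  # -> 1.5
```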
Abstract:
A Digital Scholarly Edition is a conceptually and structurally sophisticated entity. Throughout the centuries, diverse methodologies have been employed to reconstruct a text transmitted through one or multiple sources, resulting in various edition types. With the advent of digital technology in philology, these practices have undergone a significant transformation, compelling scholars to reconsider their approach in light of the web. In the digital age, philologists are expected to possess (too) advanced technical skills to prepare interactive and enriched editions, even though, in most cases, only mechanical or documentary editions are published online. The Śivadharma Database is a web Content Management System (CMS) designed to facilitate the preparation, publication, and updating of Digital Scholarly Editions. It provides scholars with a user-friendly CRUD web application to reconstruct and annotate a text, with which they can prepare their textus together with additional components such as apparatus, notes, translations, citations, and parallels. This is made possible by an annotation system based on HTML and a graph data structure, a choice motivated by the fact that the text entity is multidimensional and multifaceted, even if its sequential presentation constrains it. In particular, editions of the South Asian texts of the Śivadharma corpus, the case study of this research, contain a series of phenomena that are difficult to manage formally, such as overlapping hierarchies. Hence, it becomes necessary to establish the data structure best suited to represent this complexity. In the Śivadharma Database, the textus is an HTML file that can be displayed directly. Textual fragments, annotated via an interface that does not require philologists to write code and saved in the backend, form the atomic units of multiple relationships organised in a graph database. This approach enables the formal representation of complex and overlapping textual phenomena, allowing for good annotation expressiveness with minimal effort to learn the relevant technologies during the editing workflow.
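A minimal sketch of the graph-based annotation idea: textus fragments and editorial annotations as nodes, with typed relations between them, so that overlapping spans coexist without a single hierarchy (names and attributes are illustrative, not the Śivadharma Database schema):

```python
# Minimal sketch: fragments of the textus are nodes in a graph and each
# editorial annotation (apparatus entry, translation, parallel) is attached to
# them by a typed relation, so overlapping annotations coexist without a single
# XML-style hierarchy. Names are illustrative, not the actual schema.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("frag-1", text="śivadharma", start=120, end=130)   # a span of the textus
g.add_node("frag-2", text="dharma", start=124, end=130)       # overlaps frag-1

g.add_node("app-1", type="apparatus", reading="śivadharmma", witness="MS A")
g.add_node("transl-1", type="translation", value="the religion of Śiva")

g.add_edge("app-1", "frag-2", relation="variant_of")          # apparatus on the inner span
g.add_edge("transl-1", "frag-1", relation="translates")       # translation on the outer span

for ann, frag, data in g.edges(data=True):
    print(ann, data["relation"], frag)
```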
Abstract:
This thesis aims to research, examine and implement a Machine Learning system, more precisely a Recommendation System, that optimally recommends legal documents which have already been appropriately analysed and categorised. Its purpose is to complement an already implemented Information Retrieval system, built on top of a web application, that allows users to search those legal documents.
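A hedged sketch of a content-based recommender of the kind described: given a legal document, the most similar documents in the already categorised corpus are suggested via TF-IDF vectors and cosine similarity (the corpus and parameters are illustrative):

```python
# Hedged sketch of a content-based recommender for legal documents: TF-IDF
# vectors plus cosine similarity. Corpus and parameters are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "contratto di locazione ad uso abitativo",
    "ricorso per cassazione in materia tributaria",
    "contratto di compravendita immobiliare",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def recommend(query_index: int, k: int = 2):
    sims = cosine_similarity(doc_vectors[query_index], doc_vectors).ravel()
    sims[query_index] = -1                      # do not recommend the document itself
    return sims.argsort()[::-1][:k]

print(recommend(0))  # indices of the documents most similar to the first contract
```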
Abstract:
This work presents the development of a web application that combines the concept of crowdsourcing, a paradigm in which a collective carries out a task in order to reach a goal, with the concept of crowdsensing, a paradigm in which a group of people, using their mobile devices, share data that can subsequently be analysed. SmartCrowd acts as an intermediary between these two models: its implementation makes it possible to collect data from a "crowd" of mobile-device users by relying on a crowdsourcing platform. With SmartCrowd, campaigns are created, i.e. sets of activities carried out by the end users; an interaction system with the Microworkers crowdsourcing platform handles the recruitment of participants; data sharing via smartphone, using the GPS sensor, is performed by the users; and finally, SmartCrowd makes it possible to analyse the data received and give them a positive or negative evaluation.
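An illustrative sketch (assumed names and thresholds, not the SmartCrowd code) of the final evaluation step: a user's GPS submissions for a campaign are judged positive when enough of them fall inside the campaign's area of interest:

```python
# Illustrative sketch of evaluating crowdsensed GPS submissions: a submission is
# positive when a sufficient share of its points lies within the campaign area.
# Thresholds and coordinates are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def evaluate_submission(points, center, radius_km=1.0, min_ratio=0.8) -> bool:
    inside = sum(haversine_km(lat, lon, *center) <= radius_km for lat, lon in points)
    return inside / len(points) >= min_ratio

points = [(44.4949, 11.3426), (44.4953, 11.3431), (44.5100, 11.3500)]
print(evaluate_submission(points, center=(44.4949, 11.3426)))
# False: only 2 of the 3 points lie within 1 km of the campaign centre
```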
Abstract:
The newly inaugurated Navile District of the University of Bologna is a complex created along the Navile canal that now houses various teaching and research activities for the disciplines of Chemistry, Industrial Chemistry, Pharmacy, Biotechnology and Astronomy. A Building Information Modeling (BIM) system gives the staff of the Navile campus several ways to monitor the buildings in the complex throughout their life cycle, one of which is the ability to access real-time environmental data such as room temperature, humidity, air composition, and more, thereby simplifying operations like finding faults and optimizing the use of environmental resources. But smart features at Navile are not only available to the staff: AlmaMap Navile is a web application, whose development is documented in this thesis, that powers the public touch kiosks available throughout the campus, offering maps of the district and directions on how to reach buildings and spaces. Even if these two systems, BIM and AlmaMap, do not seem to have much in common, they share the intent of promoting awareness for informed decision making on the campus, and they do so while relying on web standards for communication. This opens up interesting possibilities and is the idea behind AlmaMap Navile 2.0, an app that interfaces with the BIM system and combines real-time sensor data with a comfort calculation algorithm, giving users the ability not just to ask for directions to a space, but also to see its comfort level in advance and, should they want to, check the environmental measurements coming from each sensor in a granular manner. The end result is a first step towards a smart-campus Digital Twin that can support all the people who are part of campus life in their daily activities, improving their efficiency and satisfaction, giving them the ability to make informed decisions, and promoting awareness and sustainability.
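A simplified sketch, not the actual AlmaMap comfort algorithm, of how a comfort score could be derived from a room's real-time temperature and humidity readings (comfort bands and penalty scales are assumptions):

```python
# Simplified sketch of a comfort score computed from real-time sensor readings:
# 1.0 inside an assumed comfort zone, decreasing as temperature and humidity
# drift away from it. Bands and scales are illustrative assumptions.
def comfort_score(temp_c: float, rel_humidity: float) -> float:
    def band_penalty(value, low, high, scale):
        if low <= value <= high:
            return 0.0
        return min(abs(value - low if value < low else value - high) / scale, 1.0)

    temp_penalty = band_penalty(temp_c, 20.0, 26.0, scale=6.0)
    hum_penalty = band_penalty(rel_humidity, 30.0, 60.0, scale=30.0)
    return round(1.0 - max(temp_penalty, hum_penalty), 2)

print(comfort_score(22.5, 45))   # 1.0  (inside the comfort zone)
print(comfort_score(29.0, 45))   # 0.5  (3 degrees above the upper bound)
```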
Abstract:
In this report, a face recognition system capable of detecting and recognizing frontal and rotated faces was developed. Two face recognition methods focusing on the aspect of pose invariance are presented and evaluated: the whole-face approach and the component-based approach. The main challenge of this project is to develop a system that is able to identify faces under different viewing angles in real time. The development of such a system will enhance the capability and robustness of current face recognition technology. The whole-face approach recognizes faces by classifying a single feature vector consisting of the gray values of the whole face image. The component-based approach first locates the facial components and extracts them. These components are normalized and combined into a single feature vector for classification. The Support Vector Machine (SVM) is used as the classifier for both approaches. Extensive tests with respect to robustness against pose changes are performed on a database that includes faces rotated up to about 40 degrees in depth. The component-based approach clearly outperforms the whole-face approach on all tests. Although this approach has proven to be more reliable, it is still too slow for real-time applications. That is the reason why a real-time face recognition system using the whole-face approach is implemented to recognize people in color video sequences.
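A hedged sketch of the component-based classification step: gray values of the extracted, normalized components are concatenated into one feature vector and classified with an SVM (component sizes and training data are dummy placeholders):

```python
# Hedged sketch of component-based classification: normalised gray values of the
# facial components are concatenated into a single feature vector and fed to an
# SVM, mirroring the report's use of an SVM classifier. Data here is dummy.
import numpy as np
from sklearn.svm import SVC

def build_feature_vector(components):
    """components: list of 2-D gray-value arrays (eyes, nose, mouth, ...)."""
    return np.concatenate([c.ravel().astype(float) / 255.0 for c in components])

rng = np.random.default_rng(0)
# Dummy training data: 20 faces, each with 3 components of 16x16 pixels.
X = np.stack([build_feature_vector([rng.integers(0, 256, (16, 16)) for _ in range(3)])
              for _ in range(20)])
y = np.array([0] * 10 + [1] * 10)   # two identities, for illustration only

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:1]))
```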
Abstract:
Introduction: Difficult tracheal intubation remains a constant and significant source of morbidity and mortality in anaesthetic practice. Insufficient airway assessment in the preoperative period continues to be a major cause of unanticipated difficult intubation. Although many risk factors have already been identified, preoperative airway evaluation is not always regarded as a standard procedure and the respective weight of each risk factor remains unclear. Moreover, the available predictive scores are poorly sensitive, only moderately specific and often operator-dependent. In order to improve the preoperative detection of patients at risk for difficult intubation, we developed a system for automated and objective evaluation of morphologic criteria of the face and neck using video recordings and advanced techniques borrowed from face recognition. Method and results: Frontal video sequences were recorded in 5 healthy volunteers. During the video recording, subjects were requested to perform maximal flexion-extension of the neck and to open the mouth wide with the tongue pulled out. A robust, real-time face tracking system was then applied, which automatically identified and mapped a grid of 55 control points on the face and tracked them during head motion. These points located important features of the face, such as the eyebrows, the nose, the contours of the eyes and mouth, and the external contours, including the chin. Moreover, based on this face tracking, the orientation of the head could be estimated at each frame of the video sequence. Thus, we could infer for each frame the pitch angle of the head pose (related to the vertical rotation of the head) and obtain the degree of head extension. Morphological criteria used in the most frequently cited predictive scores were also extracted, such as mouth opening, degree of visibility of the uvula and thyromental distance. Discussion and conclusion: Preliminary results suggest that the technique is highly feasible. The next step will be to apply the same automated and objective evaluation to patients who will undergo tracheal intubation. The difficulties related to intubation will then be correlated with the biometric characteristics of the patients. The objective is to analyze the biometric data with artificial intelligence algorithms in order to build a highly sensitive and specific predictive test.
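A sketch of how a per-frame pitch angle can be derived from tracked facial landmarks, in the spirit of the head-pose estimation described above (the generic 3-D face model, camera parameters and landmark values are illustrative assumptions):

```python
# Sketch of per-frame pitch estimation from tracked landmarks via PnP. The
# generic 3-D model points, camera intrinsics and 2-D landmark values are
# illustrative, not the study's actual tracking grid of 55 control points.
import numpy as np
import cv2

# Generic 3-D model points (mm): nose tip, chin, eye outer corners, mouth corners.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0),
], dtype=np.float64)

def head_pitch_degrees(landmarks_2d, frame_width, frame_height):
    focal = frame_width
    camera = np.array([[focal, 0, frame_width / 2],
                       [0, focal, frame_height / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, landmarks_2d, camera, None)
    rot, _ = cv2.Rodrigues(rvec)
    euler = cv2.decomposeProjectionMatrix(np.hstack((rot, tvec)))[-1]
    return float(euler[0][0])        # pitch, in degrees

# Example landmark positions (pixels) from one tracked frame of a 640x480 video.
pts = np.array([(320, 240), (325, 355), (230, 190), (410, 190),
                (270, 300), (370, 300)], dtype=np.float64)
print(round(head_pitch_degrees(pts, 640, 480), 1))
```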
Abstract:
We present an example-based learning approach for locating vertical frontal views of human faces in complex scenes. The technique models the distribution of human face patterns by means of a few view-based "face" and "non-face" prototype clusters. At each image location, the local pattern is matched against the distribution-based model, and a trained classifier determines, based on the local difference measurements, whether or not a human face exists at the current image location. We provide an analysis that helps identify the critical components of our system.
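A hedged sketch of the distribution-based idea: face and non-face prototype clusters are learned with k-means, each image window is described by its distances to all prototypes, and a trained classifier decides face versus non-face from those difference measurements (window size, cluster counts and the choice of classifier are assumptions):

```python
# Hedged sketch of distribution-based face detection: prototype clusters for
# "face" and "non-face" patterns, difference measurements to all prototypes, and
# a trained classifier on top. Data, sizes and the classifier are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
face_windows = rng.normal(0.6, 0.1, (200, 19 * 19))      # stand-in training patterns
nonface_windows = rng.normal(0.3, 0.2, (200, 19 * 19))

face_protos = KMeans(n_clusters=6, n_init=10, random_state=0).fit(face_windows)
nonface_protos = KMeans(n_clusters=6, n_init=10, random_state=0).fit(nonface_windows)

def difference_features(windows):
    # Distance of each window to every face and non-face prototype centroid.
    return np.hstack([face_protos.transform(windows), nonface_protos.transform(windows)])

X = difference_features(np.vstack([face_windows, nonface_windows]))
y = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```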