871 results for Data detection
Abstract:
Intelligent systems are now inherent to society, supporting synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poor software behavior can invalidate any attempted improvement. In addition, data-driven machine learning algorithms are the basis for human-centered applications, and their interpretability is one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic, life-long system operation. As most software registers its internal events by means of logs, log analysis is an approach to keeping systems operational. Logs are Big Data assembled in high-volume streams: unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods that provide maintenance solutions for anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analysis. LP provides a deeper semantic interpretation of anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed for the automatic parsing of system logs. All the methods use recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Regarding AD accuracy, FBeM achieved (85.64 ± 3.69)%, eGNN (96.17 ± 0.78)%, eGFC (92.48 ± 1.21)%, and eLP (96.05 ± 1.04)%. Besides being competitive, eLP also generates a log grammar and offers a higher level of model interpretability.
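As a rough illustration of the recursive granulation these methods share, the sketch below implements a minimal Gaussian-granule classifier in Python, covering only the create/update steps (merging and deletion omitted); the class structure, threshold, and update rule are hypothetical simplifications, not the thesis's actual FBeM/eGNN/eGFC/eLP algorithms.

```python
# Illustrative sketch of the recursive create/update cycle shared by
# evolving granular classifiers. Names, thresholds, and the update rule
# are simplified stand-ins, not the thesis's algorithms.
import numpy as np

class Granule:
    def __init__(self, x, label):
        self.center = x.copy()
        self.var = np.full_like(x, 0.01)  # initial squared spread
        self.label = label
        self.n = 1

    def membership(self, x):
        # Gaussian compatibility between a sample and this granule
        return float(np.exp(-0.5 * np.sum((x - self.center) ** 2 / self.var)))

    def update(self, x):
        # Recursive (incremental) center and spread update
        self.n += 1
        delta = x - self.center
        self.center += delta / self.n
        self.var += (delta * (x - self.center) - self.var) / self.n
        self.var = np.maximum(self.var, 1e-4)  # floor keeps granules open

class EvolvingClassifier:
    def __init__(self, rho=0.2):
        self.granules = []
        self.rho = rho  # minimum membership before a new granule is created

    def learn_one(self, x, label):
        x = np.asarray(x, dtype=float)
        same = [g for g in self.granules if g.label == label]
        best = max(same, key=lambda g: g.membership(x), default=None)
        if best is None or best.membership(x) < self.rho:
            self.granules.append(Granule(x, label))  # create a new granule
        else:
            best.update(x)                           # refine an existing one

    def predict_one(self, x):
        x = np.asarray(x, dtype=float)
        return max(self.granules, key=lambda g: g.membership(x)).label

clf = EvolvingClassifier()
for x, y in [([0.1, 0.2], "normal"), ([0.9, 0.8], "anomaly"),
             ([0.12, 0.18], "normal")]:
    clf.learn_one(x, y)
print(clf.predict_one([0.11, 0.21]))  # -> normal
```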
Abstract:
The continuous and swift progression of today's wireless and wired communication technologies owes its success to the foundational systems established earlier. These systems serve as the building blocks that enable services to be enhanced to meet evolving requirements. Studying the vulnerabilities of previously designed systems and their current usage leads to new communication technologies that replace the old ones, such as GSM-R in the railway field. Current industrial research focuses on finding an appropriate telecommunication solution for railway communications to replace the GSM-R standard, which will be switched off in the coming years. Various standardization organizations are exploring and designing a radio-frequency-based standard, FRMCS (Future Railway Mobile Communication System), to substitute the current GSM-R. On this topic, the primary strategic objective of the research is to assess the feasibility of leveraging current public network technologies, such as LTE, for mission- and safety-critical communication on low-density lines. The research aims to identify the constraints, define a service level agreement with telecom operators, and establish the implementations necessary to make the system as reliable as possible over an open, public network, while considering safety and cybersecurity aspects. The LTE infrastructure would be used to transmit the vital data of a railway communication system and to gather and transmit all the field measurements to the control room for maintenance purposes. Given the significance of maintenance activities in the railway sector, the ongoing research includes the implementation of a machine learning algorithm to detect railway equipment faults, reducing the time and human analysis errors caused by the large volume of measurements from the field.
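The abstract does not name the fault-detection algorithm; as a hedged illustration of this kind of screening over field measurements, the sketch below uses an isolation forest on synthetic sensor data. All names, features, and values are invented.

```python
# Hypothetical sketch: unsupervised fault detection on railway field
# measurements. The thesis does not specify its algorithm; an isolation
# forest is one common choice for this kind of anomaly screening.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for sensor readings from the field (e.g. voltage, current,
# temperature per device) -- entirely synthetic.
normal = rng.normal(0.0, 1.0, size=(1000, 3))
faulty = rng.normal(4.0, 1.0, size=(10, 3))
measurements = np.vstack([normal, faulty])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(measurements)  # -1 = anomalous reading, 1 = normal
print(f"{(flags == -1).sum()} readings flagged for inspection")
```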
Abstract:
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory for studying the universe in the very-high-energy domain. The observatory will rely on a Science Alert Generation (SAG) system to analyze the real-time data from the telescopes and generate science alerts. The SAG system will play a crucial role in the search and follow-up of transients from external alerts, enabling multi-wavelength and multi-messenger collaborations. It will maximize the potential for detecting the rarest phenomena, such as gamma-ray bursts (GRBs), which are the science case for this study. This study presents a deep-learning-based anomaly detection method for detecting gamma-ray burst events in real time. The performance of the proposed method is evaluated and compared against the standard Li & Ma technique in two use cases, serendipitous discoveries and follow-up observations, using short exposure times. The method shows promising results in detecting GRBs and is flexible enough to allow a real-time search for transient events on multiple time scales. It assumes neither a background nor a source model and does not require a minimum number of photon counts to perform the analysis, making it well suited for real-time use. Future improvements involve further tests, relaxing some of the assumptions made in this study, and post-trials correction of the detection significance. Moreover, the ability to detect other transient classes in different scenarios must be investigated for completeness. The system can be integrated within the SAG system of CTA and deployed on the on-site computing clusters. This would provide valuable insight into the method's performance in a real-world setting and another valuable tool for discovering new transient events in real time. Overall, this study makes a significant contribution to the field of astrophysics by demonstrating the effectiveness of deep-learning-based anomaly detection techniques for real-time source detection in gamma-ray astronomy.
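For reference, the baseline mentioned above is the Li & Ma significance (Li & Ma 1983, Eq. 17), computed from the photon counts in the on/off regions and the on/off exposure ratio alpha; a direct transcription in Python follows (the example counts are placeholders, not results from the study):

```python
# Li & Ma (1983, Eq. 17) detection significance: the standard baseline
# against which the deep-learning method is compared.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    n_tot = n_on + n_off
    term_on = n_on * np.log((1 + alpha) / alpha * (n_on / n_tot))
    term_off = n_off * np.log((1 + alpha) * (n_off / n_tot))
    return np.sqrt(2.0 * (term_on + term_off))

print(li_ma_significance(n_on=150, n_off=320, alpha=0.2))  # ~8.1 sigma
```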
Abstract:
In medicine, innovation depends on better knowledge of the human body, a complex system of multi-scale constituents. Unraveling the complexity underneath diseases proves challenging, and a deep understanding of the inner workings requires dealing with much heterogeneous information. Exploring the molecular status and the organization of genes, proteins, and metabolites provides insights into what drives a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of disease. This is why attention is now growing toward the joint exploitation of multi-scale information, and holistic methods are drawing interest for integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene-regulatory signatures for onco-hematology and showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. The third study, a proof of concept on network integration for atherosclerosis, demonstrated the impact of network intelligibility in modelling heterogeneous data, shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types into a unique latent heterogeneous representation that facilitated the selection of the data types most important for predicting the tumour stage of invasive ductal carcinoma. The results of these four studies laid the groundwork for easier detection of new biomarkers, ultimately benefiting medical practice and the ever-growing field of Personalized Medicine.
Abstract:
In this thesis, we investigate the role of applied physics in epidemiological surveillance through the application of mathematical models, network science, and machine learning. The spread of a communicable disease depends on many biological, social, and health factors. The large masses of data available make it possible, on the one hand, to monitor the evolution and spread of pathogenic organisms and, on the other, to study the behavior of people, their opinions, and their habits. Presented here are three lines of research that attempt to solve real epidemiological problems through data analysis and the use of statistical and mathematical models. In Chapter 1, we applied language-inspired deep learning models to transform influenza protein sequences into vectors encoding their information content. We then attempted to reconstruct the antigenic properties of different viral strains using regression models and to identify the mutations responsible for vaccine escape. In Chapter 2, we constructed a compartmental model to describe the spread of a bacterium within a hospital ward. The model was informed and validated on time series of clinical measurements, and a sensitivity analysis was used to assess the impact of different control measures. Finally, in Chapter 3, we reconstructed the network of retweets among COVID-19-themed Twitter users in the early months of the SARS-CoV-2 pandemic. By means of community detection algorithms and centrality measures, we characterized users' attention shifts in the network, showing that scientific communities, initially the most retweeted, lost influence over time to national political communities. In the Conclusion, we highlight the importance of this work in light of the main contemporary challenges for epidemiological surveillance, with reflections on the importance of nowcasting and forecasting, the relationship between data and scientific research, and the need to unite the different scales of epidemiological surveillance.
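As a generic illustration of the compartmental modelling in Chapter 2 (the thesis's actual compartments and parameters for in-ward bacterial transmission are not given in the abstract), a minimal SIR-type system can be integrated as follows; all values are illustrative.

```python
# Generic compartmental-model sketch. SIR is a stand-in for the
# (unspecified) in-ward transmission model described in Chapter 2.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i,            # susceptibles become infected
            beta * s * i - gamma * i, # infected recover at rate gamma
            gamma * i]

t = np.linspace(0, 60, 300)  # days
sol = odeint(sir, y0=[0.99, 0.01, 0.0], t=t, args=(0.3, 0.1))
print("peak prevalence:", sol[:, 1].max())
```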
Abstract:
This thesis builds on work previously carried out by other students and addresses the problem of recognizing whether a smartphone is being used by the driver of a car. It presents several methods for solving this machine learning problem, namely building datasets for model training and creating and training the models themselves, aimed at a binary classification problem and at recognizing objects through object detection. Recognizing whether the user is driving is done through the output of the smartphone's front camera, so we work on images, videos, and frames. We recognize the position of the person represented in these frames through an object detection model that detects the seat belt and the window and determines whether they belong to the driver's or the passenger's seat and position. Finally, through a careful analysis of the results obtained on 8 different videos, each split into many frames, we show that very interesting results are obtained, which can serve as a starting point for building an important driving-safety system.
Abstract:
The objective of this thesis work was the deployment of a neural-network-based approach for video object detection on board a nano-drone. Furthermore, we studied possible extensions that exploit the temporal nature of videos to improve the detection capabilities of our algorithm. For our project, we utilized MobileNetV2/V3 SSDLite due to their limited computational and memory requirements. We trained our networks on the ImageNet VID 2015 dataset and, to deploy them onto the nano-drone, we used the NNtool and Autotiler tools by GreenWaves. To exploit the temporal nature of video data we tried different approaches: the introduction of an LSTM-based convolutional layer in our architecture, and the introduction of a Kalman-filter-based tracker as a postprocessing step to augment the results of our base architecture. We obtained a total improvement of about 2.5 mAP with the Kalman-filter-based method (BYTE). Our detector runs on a microcontroller-class processor on board the nano-drone at 1.63 fps.
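As an illustration of the tracker component, the sketch below runs a minimal constant-velocity Kalman filter of the kind BYTE-style trackers use to smooth and propagate a detection between frames; the 1-D state layout and noise values are illustrative, not the thesis's exact configuration.

```python
# Minimal constant-velocity Kalman filter over a box-center coordinate.
# State is [position, velocity]; only position is measured per frame.
import numpy as np

dt = 1.0                              # one frame
F = np.array([[1, dt], [0, 1]])       # state transition
H = np.array([[1, 0]])                # measurement picks out position
Q = 0.01 * np.eye(2)                  # process noise (illustrative)
R = np.array([[0.5]])                 # measurement noise (illustrative)

x = np.array([[0.0], [0.0]])          # initial state
P = np.eye(2)                         # initial covariance

for z in [1.0, 1.9, 3.2, 3.9]:        # noisy box-center measurements
    # predict forward one frame
    x, P = F @ x, F @ P @ F.T + Q
    # update with the associated detection
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K * (z - float(H @ x))
    P = (np.eye(2) - K @ H) @ P
    print(f"pos={float(x[0]):.2f} vel={float(x[1]):.2f}")
```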
Abstract:
Intrusion detection, in the context of Network Security Monitoring practices, is the process through which security events are identified, correlated, and analyzed, by collecting and analyzing data produced by one or more sources of various kinds (e.g. copies of network traffic, copies of application/service logs, etc.), with the objective of detecting potential compromise attempts in order to protect the technological assets within a given network infrastructure. This process is the product of a combination of hardware, software, and the human factor. The human factor carries the hardest task: keeping pace with an ever-growing and extremely dynamic reality, cybercrime. It falls to the analyst to filter and analyze the information collected and to contextualize it within the environment to be protected, with the ultimate aim of enriching and refining the detection logic implemented on the systems in use. It is necessary to understand that maintaining and updating these systems is an activity that follows the evolution of attack technologies and strategies. Carrying it out effectively and efficiently is of primary importance, as it allows analysts to focus their resources on investigating security events and on researching and updating detection logic, while minimizing repetitive, time-consuming, and potentially automatable activities. This thesis aims to present a possible approach to the automated and centralized management of intrusion detection systems, paying particular attention to the IDS technologies available in the open-source landscape, and comparing the scalability and customization aspects that arise when management is extended to heterogeneous and distributed network infrastructures.
Abstract:
Hybrid Organic-Inorganic Halide Perovskites (HOIPs) comprise a large class of materials described by the general formula ABX3, where A is an organic cation, B an inorganic cation, and X a halide anion. HOIPs show excellent optoelectronic characteristics such as a tunable band gap, a high absorption coefficient, and a large mobility-lifetime product. A subclass of these materials, the so-called two-dimensional (2D) layered HOIPs, has emerged as a potential alternative to the traditional 3D analogs to enhance the stability and increase the performance of perovskite devices, particularly in the area of ionizing-radiation detectors, where these materials have reached truly remarkable milestones. One of the key challenges for the future development of efficient and stable 2D perovskite X-ray detectors is a complete understanding of the nature of the defects that lead to the formation of deep states. Deep states act as non-radiative recombination centers for charge carriers and are one of the factors that most hinder the development of efficient 2D HOIP-based X-ray detectors. In this work, deep states in PEA2PbBr4 were studied through Photo-Induced Current Transient Spectroscopy (PICTS), a highly sensitive spectroscopic technique capable of detecting the presence of deep states in highly resistive ohmic materials and of characterizing their activation energy, capture cross section and, under stringent conditions, their concentration. The evolution of deep states in PEA2PbBr4 was evaluated after exposure of the material to high doses of ionizing radiation and during aging (one year). The data obtained allowed us to evaluate the contribution of ion migration in PEA2PbBr4. This work represents an important starting point for a better understanding of transport and recombination phenomena in 2D perovskites. To date, the PICTS technique applied to 2D perovskites has not yet been reported in the scientific literature.
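For context, PICTS analysis of activation energy and capture cross section rests on the standard thermal-emission relation for a deep level; under the usual assumptions (the thesis's exact fitting procedure is not stated here), the emission rate and the Arrhenius form from which the trap parameters are extracted read:

```latex
% Standard thermal-emission relation for a deep level; \gamma collects
% effective-mass and degeneracy factors.
e_n(T) = \gamma\,\sigma_{na}\,T^{2}\exp\!\left(-\frac{E_a}{k_B T}\right)
% Arrhenius form: the slope of \ln(T^2/e_n) vs 1/T gives E_a,
% the intercept gives the capture cross section \sigma_{na}.
\ln\!\left(\frac{T^{2}}{e_n}\right) = \frac{E_a}{k_B T} - \ln\!\left(\gamma\,\sigma_{na}\right)
```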
Abstract:
The increasing number of extreme rainfall events, combined with high population density and impervious land surfaces, makes urban areas particularly vulnerable to pluvial flooding. To design and manage cities able to deal with this issue, the reconstruction of weather phenomena is essential. Among the most promising data sources are the observational networks of private sensors managed by citizens (crowdsourcing). The number of these personal weather stations is consistently increasing, and their spatial distribution roughly follows population density. Precisely for this reason, they suit this detailed study on the modelling of pluvial flooding in urban environments. The uncertainty associated with these precipitation measurements is still a matter of research. To characterise the accuracy and precision of the crowdsourced data, we carried out exploratory data analyses. A comparison between Netatmo hourly precipitation amounts and observations of the same quantity from weather stations managed by national weather services is presented. The crowdsourced stations are very good at rain detection but tend to underestimate the reference value. In detail, the accuracy and precision of crowdsourced data change as precipitation increases, with the spread improving toward the extreme values. The ability of this kind of observation to improve the prediction of pluvial flooding is then tested. To this aim, the simplified raster-based inundation model incorporated in the Saferplaces web platform is used to simulate pluvial flooding. Different precipitation fields have been produced and tested as input to the model. Two case studies are analysed over the most densely populated Norwegian city: Oslo. The crowdsourced weather-station observations, bias-corrected (i.e. increased by 25%), showed very good skill in detecting flooded areas.
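A minimal sketch of the bias-correction and detection-skill check described above, on synthetic data: the 25% adjustment is the one stated in the abstract, while the distributions and the 0.1 mm/h wet threshold are invented for illustration.

```python
# Synthetic comparison of crowdsourced gauges against a reference,
# followed by the +25% bias correction mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.gamma(2.0, 2.0, size=500)             # mm/h, national stations
netatmo = 0.8 * reference + rng.normal(0, 0.5, 500)   # underestimating gauges
netatmo = np.clip(netatmo, 0, None)

corrected = 1.25 * netatmo                            # the +25% adjustment

wet = reference >= 0.1                                # rain-detection threshold
hit_rate = np.mean((netatmo >= 0.1)[wet])
print(f"detection hit rate: {hit_rate:.2f}")
print(f"mean bias before/after correction: {np.mean(netatmo - reference):.2f}"
      f" / {np.mean(corrected - reference):.2f} mm/h")
```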
Abstract:
City streets carry a lot of information that can be exploited to improve the quality of the services citizens receive. For example, autonomous vehicles need to act according to all the elements near the vehicle itself, like pedestrians, traffic signs, and other vehicles. It is also possible to use such information for smart-city applications, for example to predict and analyze traffic or pedestrian flows. Among all the objects found in a street, traffic signs are very important because of the information they carry, which can be exploited both for autonomous driving and for smart-city applications. Deep learning and, more generally, machine learning models, however, need huge quantities of data to learn. Even though modern models are very good at generalizing, the more samples a model has, the better it can generalize across different samples. Creating these datasets organically, namely with real pictures, is a very tedious task because of the wide variety of signs across the world and especially because of all the possible lighting, orientation, and other conditions in which they can appear. In addition, it may not be easy to collect enough samples of all possible traffic signs, because some of them are very rare. Instead of collecting pictures manually, it is possible to exploit data augmentation techniques to create synthetic datasets containing the needed signs. Creating this data synthetically allows control of the distribution and conditions of the signs in the datasets, improving the quality and quantity of the training data. This thesis work is about using copy-paste data augmentation to create synthetic data for the traffic sign recognition task.
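A minimal copy-paste augmentation sketch in Python, assuming a sign cutout with an alpha channel and a street background; the file names and the scale range are placeholders, not the thesis's pipeline.

```python
# Copy-paste augmentation: composite a traffic-sign cutout onto a street
# background at a random scale/position and emit the ground-truth box.
import random
from PIL import Image

def paste_sign(background_path, sign_path):
    bg = Image.open(background_path).convert("RGB")
    sign = Image.open(sign_path).convert("RGBA")  # alpha mask for clean edges

    scale = random.uniform(0.05, 0.2)             # sign size relative to scene
    w = int(bg.width * scale)
    h = int(sign.height * w / sign.width)
    sign = sign.resize((w, h))

    x = random.randint(0, bg.width - w)
    y = random.randint(0, bg.height - h)
    bg.paste(sign, (x, y), mask=sign)             # alpha-composite the cutout
    return bg, (x, y, x + w, y + h)               # image + bounding box

image, box = paste_sign("street.jpg", "stop_sign.png")  # placeholder files
image.save("augmented.jpg")
print("bbox:", box)
```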
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations, and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources into a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, GO biological processes, and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
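As a sketch of what the Interactome module assembles, the snippet below builds a network from a list of interactions, attaches expression levels, and computes the topological metrics mentioned above. The node and edge data are invented, and networkx's GraphML writer stands in for IIS's actual XGMML export (Cytoscape imports both formats).

```python
# Build an interaction network, annotate nodes, compute topological
# metrics, and export for Cytoscape. All data here is illustrative.
import networkx as nx

interactions = [("BRCA1", "TP53"), ("TP53", "MDM2"), ("BRCA1", "RAD51")]
expression = {"BRCA1": 2.1, "TP53": 0.7, "MDM2": 1.3, "RAD51": 1.9}

g = nx.Graph(interactions)
nx.set_node_attributes(g, expression, "expression")

betweenness = nx.betweenness_centrality(g)
for node in g:
    g.nodes[node]["degree"] = g.degree[node]
    g.nodes[node]["betweenness"] = betweenness[node]

nx.write_graphml(g, "interactome.graphml")  # stand-in for XGMML export
```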
Abstract:
The article investigates patterns of performance in, and relationships between, grip strength, gait speed, and self-rated health, considering the variables of gender, age, and family income. This was conducted in a probabilistic sample of community-dwelling elderly aged 65 and over, members of a population study on frailty. A total of 689 elderly people without cognitive deficit suggestive of dementia underwent tests of gait speed and grip strength. Comparisons between groups were based on low, medium, and high speed and strength. Self-rated health was assessed using a 5-point scale. The males and the younger elderly individuals scored significantly higher on grip strength and gait speed than the females and the oldest did; the richest scored higher than the poorest on grip strength and gait speed; females and men aged over 80 had weaker grip strength and lower gait speed; and slow gait speed and low income emerged as risk factors for a worse health evaluation. Lower muscular strength affects the self-rated assessment of health because it results in a reduction in functional capacity, especially in the presence of poverty and a lack of compensatory factors.
Abstract:
Obstructive sleep apnea syndrome has a high prevalence among adults, and cephalometric variables can be a valuable method for evaluating patients with this syndrome. Our aim was to correlate cephalometric data with the apnea-hypopnea index. We performed a retrospective, cross-sectional study analyzing the cephalometric data of patients followed in the Sleep Disorders Outpatient Clinic of the Discipline of Otorhinolaryngology of a university hospital, from June 2007 to May 2012. Ninety-six patients were included, 45 men and 51 women, with a mean age of 50.3 years. A total of 11 patients had snoring, 20 had mild apnea, 26 had moderate apnea, and 39 had severe apnea. The distance from the hyoid bone to the mandibular plane was the only variable that showed a statistically significant correlation with the apnea-hypopnea index. Cephalometric variables are thus useful tools for understanding obstructive sleep apnea syndrome, with the hyoid-to-mandibular-plane distance correlating significantly with the apnea-hypopnea index.
Abstract:
To assess binocular detection grating acuity using the LEA GRATINGS test and establish age-related norms in healthy infants during their first 3 months of life, we conducted a prospective, longitudinal study of healthy infants with a clear red reflex at birth. Responses to gratings were measured at 1, 2, and 3 months of age using LEA gratings at a distance of 28 cm. The results were recorded as detection grating acuity values, which were arranged in frequency tables and converted to a one-octave scale for statistical analysis. For the repeated measurements, analysis of variance (ANOVA) was used to compare the detection grating acuity results between ages. A total of 133 infants were included. The binocular responses to gratings developed toward higher mean values and spatial frequencies, ranging from 0.55 ± 0.70 cycles per degree (cpd), or 1.74 ± 0.21 logMAR, in month 1 to 3.11 ± 0.54 cpd, or 0.98 ± 0.16 logMAR, in month 3. Repeated-measures ANOVA indicated differences among grating acuity values in the three age groups. The LEA GRATINGS test allowed assessment of detection grating acuity and its development in a cohort of healthy infants during their first 3 months of life.
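For reference, the cpd and logMAR values reported above are related by the standard conversion (30 cpd corresponds to a minimum angle of resolution of 1 arcmin), which reproduces the reported pairs:

```latex
% Grating acuity in cycles per degree (cpd) vs logMAR: one grating cycle
% is a pair of bars, so 30 cpd corresponds to a MAR of 1 arcmin.
\mathrm{logMAR} = \log_{10}\!\left(\frac{30}{\mathrm{cpd}}\right),
\qquad
\log_{10}\!\left(\tfrac{30}{0.55}\right) \approx 1.74,
\quad
\log_{10}\!\left(\tfrac{30}{3.11}\right) \approx 0.98
```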