957 results for Machine Approach


Relevance:

30.00%

Publisher:

Abstract:

Modifications in vegetation cover can have an impact on climate through changes in biogeochemical and biogeophysical processes. In this paper, the tree canopy cover percentage of a savannah-like ecosystem (montado/dehesa) was estimated at Landsat pixel level for 2011, and the role of different canopy cover percentages in land surface albedo (LSA) and land surface temperature (LST) was analysed. A modelling procedure using an SGB machine-learning algorithm, with Landsat 5 TM spectral bands and derived vegetation indices as explanatory variables, showed that montado canopy cover could be estimated with good agreement (R² = 78.4%). Overall, the canopy cover estimations showed that the low canopy cover class (MT_1) is the most representative, accounting for 50.63% of the total montado area. MODIS LSA and LST products were used to investigate the magnitude of the differences in mean annual LSA and LST values between contrasting montado canopy cover percentages. A significant statistical relationship was found between montado canopy cover percentage and both mean annual surface albedo (R² = 0.866, p < 0.001) and surface temperature (R² = 0.942, p < 0.001). Comparisons between the four contrasting montado canopy cover classes showed marked differences in LSA (χ² = 192.17, df = 3, p < 0.001) and LST (χ² = 318.18, df = 3, p < 0.001). The highest canopy cover class (MT_4) generally had lower albedo than the lowest canopy cover class, with a difference of −11.2% in mean annual albedo values. MT_4 and MT_3 were also shown to be the cooler canopy cover classes and MT_2 and MT_1 the warmer, with the MT_1 class differing by 3.42 °C from the MT_4 class. Overall, this research highlights the role that potential changes in montado canopy cover may play in local land surface albedo and temperature variations, as an increase in these two biogeophysical parameters may, in the long term, bring about local/regional climatic changes towards greater aridity.
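
As an illustration of the modelling procedure described above, the following minimal sketch fits a stochastic gradient boosting regressor to Landsat band reflectances and a derived vegetation index to predict canopy cover percentage. It assumes SGB stands for stochastic gradient boosting; the file name and column names (b1–b7, ndvi, canopy_cover_pct) are hypothetical placeholders, not the authors' data.

```python
# Minimal sketch: stochastic gradient boosting for canopy-cover estimation
# from Landsat bands (hypothetical file and column names, not the authors' dataset).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("montado_pixels.csv")                   # hypothetical training table
features = ["b1", "b2", "b3", "b4", "b5", "b7", "ndvi"]  # spectral bands + vegetation index
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["canopy_cover_pct"], test_size=0.3, random_state=0)

# subsample < 1.0 makes the boosting stochastic (SGB-style)
model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                  subsample=0.5, max_depth=3, random_state=0)
model.fit(X_train, y_train)
print("R2:", r2_score(y_test, model.predict(X_test)))
```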

Relevance:

30.00%

Publisher:

Abstract:

Machine Learning techniques are very useful because they make it possible to maximise the use of information in real time. The Random Forests method can be counted among the most recent and best-performing Machine Learning techniques. Exploiting the characteristics and potential of this method, this doctoral thesis addresses two different case studies, from which two different forecasting models were developed. The first case study focuses on the main rivers of the Emilia-Romagna region, which are characterised by very short response times. The choice of these rivers was not accidental: in recent years, several flood events, largely of the "flash flood" type, have occurred in these basins. The second case study concerns the main sections of the Po river, where the flood wave propagation time is longer than in the watercourses of the first case study. Starting from a large amount of data, the first step was to select and define the input data according to the objectives to be achieved, for both case studies. For the model concerning the Emilia-Romagna rivers, only observed data were considered, whereas for the Po river basin the observed data were complemented with forecast data coming from the Mike11 NAM/HD modelling chain. Exploiting one of the main features of the Random Forests method, a probability of occurrence was estimated: this aspect is fundamental both at the technical stage and at the decision-making stage for any civil protection intervention. Data processing and model development were carried out in the R environment. At the end of the validation phase, the encouraging results obtained made it possible to integrate the model developed in the first case study into the operational architecture of FEWS.
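
To illustrate the probability-of-occurrence output mentioned above, here is a minimal sketch in Python (the thesis itself worked in the R environment); the file, feature names and the threshold-exceedance target are hypothetical placeholders, not the models developed in the thesis.

```python
# Minimal sketch: Random Forest giving a probability that a flood threshold
# will be exceeded (hypothetical features; the actual work was done in R).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("river_observations.csv")       # hypothetical observed data
features = ["rain_6h", "rain_24h", "level_now", "level_trend"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["exceeds_alert_level"], test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)

# predict_proba yields the probability of occurrence used in decision-making
p_exceed = rf.predict_proba(X_test)[:, 1]
print(p_exceed[:5])
```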

Relevance:

30.00%

Publisher:

Abstract:

The job of a historian is to understand what happened in the past, resorting in many cases to written documents as a first-hand source of information. Text, however, is not the only source of knowledge. Pictorial representations have also accompanied the main events of the historical timeline. In particular, the opportunity to represent circumstances visually has flourished since the invention of photography, with the possibility of capturing the occurrence of specific events in real time. Thanks to the widespread use of digital technologies (e.g. smartphones and digital cameras), networking capabilities and the consequent availability of multimedia content, the academic and industrial research communities have developed artificial intelligence (AI) paradigms with the aim of inferring, transferring and creating new layers of information from images, videos, etc. While AI communities are devoting much of their attention to analyzing digital images, from a historical research standpoint more interesting results may be obtained by analyzing analog images from the pre-digital era. Within this scenario, the aim of this work is to analyze a collection of analog documentary photographs, building upon state-of-the-art deep learning techniques. In particular, the analysis carried out in this thesis aims at producing two results: (a) estimating the date of an image, and (b) recognizing its background socio-cultural context, as defined by a group of historical-sociological researchers. Given these premises, the contribution of this work amounts to: (i) the introduction of a historical dataset of "Family Album" images spanning the twentieth century, (ii) the introduction of a new classification task regarding the identification of the socio-cultural context of an image, and (iii) the exploitation of different deep learning architectures to perform image dating and image socio-cultural context classification.
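
A minimal sketch of how image dating could be framed as classification with a pretrained CNN is given below; the folder layout, the choice of ResNet-18 and the decade-level classes are assumptions for illustration, not the architectures actually used in the thesis.

```python
# Minimal sketch: fine-tuning a pretrained CNN to classify the decade of an
# analog photograph (hypothetical folder layout, not the thesis dataset).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("family_album/train", transform=tfm)  # one folder per decade
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))     # decades as classes

optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                    # single epoch for brevity
    optim.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optim.step()
```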

Relevance:

30.00%

Publisher:

Abstract:

Although the debate over what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life-science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, showing performance comparable to the interpretation of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data-transfer failures. In particular, we analyze the error messages produced by failed transfers and propose a Machine Learning pipeline that leverages the word2vec language model and K-means clustering. This provides groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability to understand the message content and provide meaningful groupings, in line with incidents previously reported by human operators.
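
The error-clustering pipeline of the second part can be sketched as follows; the example messages and hyperparameters are illustrative placeholders, not the thesis configuration.

```python
# Minimal sketch: clustering transfer-failure messages with word2vec + K-means
# (hypothetical messages; parameters are illustrative, not the thesis settings).
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

messages = [
    "checksum mismatch at destination",
    "connection timed out to storage element",
    "checksum mismatch detected after transfer",
]
tokens = [m.lower().split() for m in messages]

w2v = Word2Vec(sentences=tokens, vector_size=50, window=3, min_count=1, seed=0)
# represent each message as the mean of its word vectors
X = np.array([w2v.wv[t].mean(axis=0) for t in tokens])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                            # groups of similar errors for operators
```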

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, manufacturing companies have been facing two significant challenges. First, digitalization requires adopting Industry 4.0 technologies and allows the creation of smart, connected, self-aware, and self-predictive factories. Second, the focus on sustainability requires evaluating and reducing the impact of the implemented solutions from economic and social points of view. In manufacturing companies, the maintenance of physical assets plays a critical role. Increasing the reliability and availability of production systems minimizes system downtime; in addition, proper system functioning avoids production waste and potentially catastrophic accidents. Digitalization and new ICT technologies have assumed a relevant role in maintenance strategies. They make it possible to assess the health condition of machinery at any point in time. Moreover, they allow the future behavior of machinery to be predicted, so that maintenance interventions can be planned and the useful life of components exploited until just before their failure. This dissertation provides insights on Predictive Maintenance goals and tools in Industry 4.0 and proposes a novel data acquisition, processing, sharing, and storage framework that addresses typical issues machine producers and users encounter. The research elaborates on two research questions that narrow down the potential approaches to data acquisition, processing, and analysis for fault diagnostics in evolving environments. The research activity is developed according to a research framework in which the research questions are addressed by research levers that are explored through research topics. Each topic requires a specific set of methods and approaches; however, the overarching methodological approach presented in this dissertation includes three fundamental aspects: maximizing the quality of the input data, using Machine Learning methods for data analysis, and using case studies deriving from both controlled environments (laboratory) and real-world instances.

Relevance:

30.00%

Publisher:

Abstract:

In order to comply with more stringent environmental protection limits, besides increasing the share of electric and hybrid vehicles, in the mid-term the auto industry must improve the efficiency of the internal combustion engine and the well-to-wheel efficiency of the fuel employed. To achieve this target, a deeper knowledge of the phenomena that influence mixture formation and the chemical reactions involving new synthetic fuel components is mandatory, but complex and time-intensive to obtain purely by experimentation. Numerical simulations therefore play an important role in this development process, but their use can be effective only if they are accurate enough to capture these variations. The models most relevant to simulating reacting mixture formation and the subsequent chemical reactions have been investigated in the present work with a critical approach, in order to provide instruments for defining the most suitable approaches in the industrial context as well, which is limited by time constraints and budget evaluations. To overcome these limitations, new methodologies have been developed to combine detailed and simplified modelling techniques for the phenomena involving chemical reactions and mixture formation in non-traditional conditions (e.g. water injection, biofuels, etc.). Through extensive use of machine learning and deep learning algorithms, several applications have been revised or implemented, with the goal of reducing the computing time of some traditional tasks by orders of magnitude. Finally, a complete workflow leveraging these new models has been defined and used to evaluate the effects of different surrogate formulations of the same experimental fuel on a proof-of-concept GDI engine model.
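
As a loose illustration of replacing a time-consuming simulation sub-model with a learned surrogate, the sketch below fits a small neural-network regressor to precomputed samples; the file, input and output names are hypothetical and are not the specific models developed in this work.

```python
# Minimal sketch: neural-network surrogate of an expensive simulation sub-model
# (hypothetical inputs/outputs; illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

data = np.load("cfd_samples.npz")                # hypothetical precomputed samples
X, y = data["operating_conditions"], data["laminar_flame_speed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)
print("test R2:", surrogate.score(X_test, y_test))   # surrogate replaces the slow sub-model
```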

Relevance:

30.00%

Publisher:

Abstract:

Allostery is a phenomenon of fundamental importance in biology, allowing regulation of function and dynamic adaptability of enzymes and proteins. Although the allosteric effect was first observed more than a century ago, allostery remains a biophysical enigma, described as the "second secret of life". The challenge is mainly associated with the rather complex nature of allosteric mechanisms, which manifest themselves as the alteration of the biological function of a protein/enzyme (e.g. ligand/substrate binding at the active site) by the binding of an "other object" ("allos stereos" in Greek) at a site distant (> 1 nanometer) from the active site, namely the effector site. Thus, at the heart of allostery there is signal propagation from the effector to the active site through a dense protein matrix, and a fundamental challenge is the elucidation of the physico-chemical interactions between amino acid residues that allow communication between the two binding sites, i.e. the "allosteric pathways". Here, we propose a multidisciplinary approach based on a combination of computational chemistry, involving molecular dynamics simulations of protein motions; (bio)physical analysis of allosteric systems, including multiple sequence alignments of known allosteric systems; and mathematical tools based on graph theory and machine learning, which can greatly help in understanding the complexity of the dynamical interactions involved in the different allosteric systems. The project aims at developing robust and fast tools to identify unknown allosteric pathways. The characterization and prediction of such allosteric spots could elucidate and fully exploit the power of allosteric modulation in enzymes and DNA-protein complexes, with great potential applications in enzyme engineering and drug discovery.
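
One simple way to search for candidate allosteric pathways, sketched here purely as an illustration of the graph-theoretical ingredient, is to compute shortest paths on a residue interaction graph; the residues, edge weights and their derivation are hypothetical, not the project's actual tools.

```python
# Minimal sketch: candidate allosteric pathways as shortest paths on a residue
# contact graph (hypothetical contacts; real work would derive edge weights
# from MD simulations and sequence analysis).
import networkx as nx

# edges: (residue_i, residue_j, coupling strength)
contacts = [("A10", "A25", 0.8), ("A25", "A60", 0.6),
            ("A60", "A112", 0.9), ("A10", "A112", 0.1)]

G = nx.Graph()
for i, j, w in contacts:
    G.add_edge(i, j, weight=1.0 - w)             # stronger coupling = shorter edge

effector_site, active_site = "A10", "A112"
path = nx.shortest_path(G, effector_site, active_site, weight="weight")
print(path)                                      # e.g. ['A10', 'A25', 'A60', 'A112']
```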

Relevance:

30.00%

Publisher:

Abstract:

Explaining to a robot what to do is a difficult undertaking, and so far only specific types of people have been able to do it, such as programmers or operators who have learned how to use controllers to communicate with a robot. The goal of my internship was to create and develop a framework that makes this easier. The system uses deep learning techniques to recognize a set of hand gestures, both static and dynamic. Then, based on the gesture, it sends a command to a robot. To be as generic as possible, the communication is implemented using the Robot Operating System (ROS). Furthermore, users can add new recognizable gestures and link them to new robot actions; a finite state automaton enforces input verification and the correct action sequence. Finally, users can create and use a macro to describe a sequence of actions performable by a robot.
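
A minimal sketch of the finite-state-automaton idea is shown below; the gestures, commands and transitions are hypothetical placeholders (the real framework sends the resulting commands over ROS).

```python
# Minimal sketch: a finite state automaton that validates gesture input before
# issuing a robot command (hypothetical gestures/commands).
class GestureFSM:
    def __init__(self):
        self.state = "idle"
        # (state, gesture) -> (next state, command to send)
        self.transitions = {
            ("idle", "open_palm"): ("armed", None),
            ("armed", "point_left"): ("idle", "move_left"),
            ("armed", "point_right"): ("idle", "move_right"),
            ("armed", "fist"): ("idle", "stop"),
        }

    def on_gesture(self, gesture):
        key = (self.state, gesture)
        if key not in self.transitions:          # reject out-of-sequence input
            return None
        self.state, command = self.transitions[key]
        return command

fsm = GestureFSM()
print(fsm.on_gesture("point_left"))              # None: the FSM must be armed first
print(fsm.on_gesture("open_palm"), fsm.on_gesture("point_left"))  # None move_left
```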

Relevance:

30.00%

Publisher:

Abstract:

Vision systems are powerful tools playing an increasingly important role in modern industry to detect errors and maintain product standards. With the increased availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to monitoring industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect possible anomalies with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to obtain the final performance. Transfer learning, which alleviates the requirement for large amounts of training data, combined with data augmentation methods consisting in the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, designed for vial counting and discrepancy detection, respectively. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
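
A minimal sketch of combining data augmentation with a pretrained backbone for this kind of inspection task is given below; the folder names, the ResNet-18 backbone and the augmentations are assumptions for illustration, not the system actually deployed on the line.

```python
# Minimal sketch: data augmentation plus a pretrained backbone fine-tuned for
# discrepancy classification (hypothetical folders, not the real line data).
import torch.nn as nn
from torchvision import datasets, models, transforms

augment = transforms.Compose([                   # synthetic variations of few real images
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("vial_packs/train", transform=augment)  # ok / anomaly

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():                  # transfer learning: freeze the backbone
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, len(train_set.classes))
# ...the new head backbone.fc is then trained on train_set with a standard loop
```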

Relevance:

30.00%

Publisher:

Abstract:

Emissions estimation, both during homologation and in standard driving, is one of the new challenges that the automotive industry has to face. New European and American regulations will allow ever smaller quantities of carbon monoxide emissions and will require all vehicles to be able to monitor their own pollutant production. Since numerical models are too computationally expensive and approximate, new solutions based on Machine Learning are replacing standard techniques. In this project we considered a real V12 internal combustion engine and propose a novel approach that pushes Random Forests to generate meaningful predictions even in extreme cases (extrapolation, very high-frequency peaks, noisy instrumentation, etc.). The present work also proposes a data preprocessing pipeline for strongly unbalanced datasets and a reinterpretation of the regression problem as a classification problem in a logarithmically quantized domain. Results have been evaluated for two different models representing a pure interpolation scenario (more standard) and an extrapolation scenario, to test the out-of-bounds robustness of the model. The metrics employed take into account different aspects which can affect the homologation procedure, so the final analysis focuses on combining all the specific performances together to obtain the overall conclusions.
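
The reinterpretation of regression as classification over a logarithmically quantized domain can be sketched as follows; the synthetic data, bin edges and Random Forest settings are illustrative only, not the project's pipeline.

```python
# Minimal sketch: recasting emission regression as classification over
# logarithmically quantized bins, then mapping predictions back to values
# (synthetic stand-in data; bin edges are illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                        # stand-in for engine signals
co = np.exp(X[:, 0] + 0.3 * X[:, 1]) * 1e-3           # stand-in for CO emission values

bins = np.logspace(-5, 0, num=21)                     # logarithmic quantization
y_class = np.clip(np.digitize(co, bins), 1, len(bins) - 1)   # class label = bin index

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y_class)
pred_bins = rf.predict(X[:5])
# reconstruct a numeric estimate as the geometric centre of the predicted bin
pred_value = np.sqrt(bins[pred_bins - 1] * bins[pred_bins])
print(pred_value)
```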

Relevance:

30.00%

Publisher:

Abstract:

Combinatorial decision and optimization problems arise in numerous applications, such as logistics and scheduling, and can be solved with various approaches. Boolean Satisfiability and Constraint Programming solvers are among the most used, and their performance is significantly influenced by the model chosen to represent a given problem. This has led to the study of model reformulation methods, one of which is tabulation, which consists in rewriting the expression of a constraint in terms of a table constraint. To apply it, one should identify which constraints can help and which can hinder the solving process. So far this has been performed by hand, for example in MiniZinc, or automatically with manually designed heuristics, as in Savile Row. However, it has been shown that the performance of these heuristics differs across problems and solvers, in some cases helping and in others hindering the solving procedure. Recent works in the field of combinatorial optimization have shown that Machine Learning (ML) can be increasingly useful in model reformulation steps. This thesis aims to design an ML approach to identify the instances for which Savile Row's heuristics should be activated. Additionally, since the heuristics may miss some good tabulation opportunities, we perform an exploratory analysis for the creation of an ML classifier able to predict whether or not a constraint should be tabulated. The results obtained for the first goal show that a random forest classifier leads to an increase in the performance of four different solvers. The experimental results for the second task show that an ML approach could improve the performance of a solver for some problem classes.
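
A minimal sketch of the first goal, a classifier deciding whether to activate Savile Row's tabulation heuristics for a given instance, is given below; the instance features, labels and file are hypothetical placeholders.

```python
# Minimal sketch: a classifier predicting whether the tabulation heuristics
# should be activated for an instance (hypothetical features and labels
# gathered from prior benchmark runs).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

runs = pd.read_csv("tabulation_runs.csv")        # hypothetical benchmark results
features = ["n_constraints", "n_variables", "max_domain_size", "n_table_candidates"]
# label: 1 if activating the heuristics reduced total solving time on this instance
X, y = runs[features], runs["tabulation_helped"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # estimate of decision accuracy
```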

Relevance:

30.00%

Publisher:

Abstract:

The importance of product presentation in the marketing industry is well known. Labels are crucial for providing information to the buyer, but at a modest additional expense, a beautiful label with exquisite embellishments may also give the goods a sensation of high quality and elegance. Enhancing the capabilities of stamping machines is required to keep up with the increasing speed of production lines in the modern manufacturing industry and to offer new opportunities for customization. It is in this context of improvements and refinements that this work takes place. The thesis was developed during an internship at Studio D, the firm that designs the mechanics of the machines produced by Cartes. The aim of this work is to study possible upgrades for the existing hot stamping machines. The main focus is centred on two objectives: first, evaluating the pressing forces generated by the machine and characterising how the mat used in the stamping process reacts to such forces; second, proposing a new configuration for the press mechanism in order to improve the rigidity and performance of the machines. The first objective is reached through a combined approach: the mat is roughly characterized from experimental data, while the frame of the machine is studied through FEM analysis. The results obtained are combined and used to upgrade a worksheet that allows the forces exerted by the machines to be estimated. The second objective is reached with the proposal of new, improved designs for the main components of the machines.

Relevance:

30.00%

Publisher:

Abstract:

The comfort level of the seat has a major effect on the usage of a vehicle; thus, car manufacturers have been working on improving car seat comfort as much as possible. However, comfort testing and evaluation are still done through exhaustive trial-and-error testing and manual evaluation of data. In this thesis, we resort to machine learning and Artificial Neural Networks (ANN) to develop a fully automated approach. Even though this approach has advantages in minimizing time and using a large set of data, it takes away the engineer's freedom in making decisions. The focus of this study is on filling the gap in a two-step comfort evaluation that used pressure mapping of body regions to evaluate the average pressure supported by specific body parts, and Self-Assessment Exam (SAE) questions to evaluate the person's interest. This study created a machine learning algorithm that gives the engineer a degree of freedom in decision-making when mapping pressure values to body regions using an ANN. The mapping is done with 92% accuracy and with the help of a Graphical User Interface (GUI) that facilitates the process during comfort evaluation testing of the car seat, decreasing the duration of the test analysis from days to hours.
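
A minimal sketch of mapping pressure-mat readings to body regions with a small neural network is given below; the file, sensor layout and region labels are hypothetical placeholders, not the thesis data or network.

```python
# Minimal sketch: a small neural network mapping pressure-mat readings to body
# regions (hypothetical sensor layout and labels, not the thesis dataset).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

data = np.load("seat_pressure.npz")              # hypothetical recorded test sessions
X, y = data["pressure_cells"], data["body_region"]   # e.g. 0=thigh, 1=buttock, 2=lumbar
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
ann.fit(X_train, y_train)
print("accuracy:", ann.score(X_test, y_test))    # the thesis reports about 92%
```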

Relevance:

20.00%

Publisher:

Abstract:

Xanthomonas citri subsp. citri (X. citri) is the causative agent of citrus canker, a disease that affects several citrus plants in Brazil and across the world. Although many studies have demonstrated the importance of genes for infection and pathogenesis in this bacterium, there are no data related to phosphate uptake and assimilation pathways. To identify the proteins that are involved in the phosphate response, we performed a proteomic analysis of X. citri extracts after growth in three culture media with different phosphate concentrations. Using mass spectrometry and bioinformatics analysis, we showed that X. citri conserves orthologs of Pho regulon genes from Escherichia coli, including the two-component system PhoR/PhoB, the ATP-binding cassette (ABC) transporter Pst for phosphate uptake, and the alkaline phosphatase PhoA. Analysis performed under phosphate starvation provided evidence of the relevance of the Pst system for phosphate uptake, as well as of both periplasmic binding proteins, PhoX and PstS, which were produced in high abundance. The results from this study are the first evidence of Pho regulon activation in X. citri and bring new insights for studies related to bacterial metabolism and physiology. Biological significance: Using proteomics and bioinformatics analysis we showed for the first time that the phytopathogenic bacterium X. citri conserves a set of proteins belonging to the Pho regulon, which are induced during phosphate starvation. The most relevant in terms of conservation and up-regulation were the periplasmic binding proteins PstS and PhoX from the ABC transporter PstSBAC for phosphate, the two-component system composed of PhoR/PhoB, and the alkaline phosphatase PhoA.

Relevance:

20.00%

Publisher:

Abstract:

In the current study, a new approach has been developed for correcting the effect that moisture reduction after virgin olive oil (VOO) filtration exerts on the apparent increase of the secoiridoid content, by using an internal standard during extraction. Firstly, VOOs from two main Spanish varieties (Picual and Hojiblanca) were submitted to industrial filtration. Afterwards, the moisture content was determined in unfiltered and filtered VOOs, and liquid-liquid extraction of phenolic compounds was performed using different internal standards. The resulting extracts were analyzed by HPLC-ESI-TOF/MS, in order to gain maximum information on the phenolic profiles of the samples under study. The reducing effect of filtration on moisture content, phenolic alcohols, and flavones was confirmed at the industrial scale. Oleuropein was chosen as the internal standard and, for the first time, the apparent increase of secoiridoids in filtered VOO was corrected using a correction coefficient (Cc) calculated from the variation of the internal standard area in filtered and unfiltered VOO during extraction. This approach gave the real concentration of secoiridoids in filtered VOO and clarified the effect of the filtration step on the phenolic fraction. This finding is of great importance for future studies that seek to quantify phenolic compounds in VOOs.
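
The abstract does not state the formula for Cc explicitly; a plausible reading, given here purely as an assumption, is that the coefficient rescales the apparent secoiridoid concentration by the ratio of internal-standard peak areas recovered from the unfiltered and filtered oils:

```latex
% Assumed form of the correction (not given explicitly in the abstract):
% the internal-standard (IS) peak areas from unfiltered and filtered VOO
% extractions define Cc, which rescales the apparent concentration.
C_c = \frac{A_{IS}^{\mathrm{unfiltered}}}{A_{IS}^{\mathrm{filtered}}}, \qquad
c_{\mathrm{secoiridoid}}^{\mathrm{real}} = C_c \cdot c_{\mathrm{secoiridoid}}^{\mathrm{apparent}}
```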