20 results for Lifelong learning: one focus, different systems
Abstract:
In recent decades the automotive sector has undergone a technological revolution, driven mainly by increasingly restrictive regulations, by newly introduced technologies and, lastly, by the dwindling fossil-fuel resources remaining on Earth. Promising solutions for vehicle propulsion are represented by alternative architectures and energy sources, for example fuel-cell and pure electric vehicles. The automotive transition to new, green vehicles is passing through the development of hybrid vehicles, which usually combine the positive aspects of each technology. To fully exploit the potential of hybrid vehicles, however, it is important to manage the powertrain's degrees of freedom in the smartest way possible; otherwise hybridization would be worthless. To this aim, this dissertation focuses on the development of energy management strategies and predictive control functions. These algorithms have the goal of increasing the overall powertrain efficiency while also increasing driver safety. The control algorithms have been applied to an axle-split Plug-in Hybrid Electric Vehicle with a complex architecture that allows more than one driving mode, including a pure electric one. Three energy management strategies are mainly investigated: the vehicle's baseline heuristic controller, referred to in the following as the rule-based controller; a sub-optimal controller that can also include predictive functionalities, referred to as the Equivalent Consumption Minimization Strategy; and a global-optimum control technique, called Dynamic Programming, which also includes the thermal management of the high-voltage battery. During this project, different modelling approaches have been applied to the powertrain, including Hardware-in-the-Loop, and several high-level powertrain controllers have been developed and implemented, with increasing complexity at each step. The results prove the potential of sophisticated powertrain control techniques and show that the achievable fuel-economy benefits are largely influenced by the chosen energy management strategy, even for the powerful vehicle investigated.
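As a rough illustration of the sub-optimal strategy mentioned in this abstract, the Python sketch below shows the core ECMS step: at each instant, the power split between engine and battery is chosen to minimize the equivalent fuel power, i.e. fuel power plus an equivalence factor times the battery electrical power. The engine map, power limits and equivalence factor are invented for the example; this is not the thesis controller.

    # Minimal sketch of one ECMS step. All maps, limits and the equivalence
    # factor below are invented for illustration; this is not the thesis
    # controller.

    def engine_fuel_power(p_engine_w, p_max_w=100e3):
        """Hypothetical engine map: fuel power drawn to deliver a mechanical
        output, with efficiency rising with load (crude assumption)."""
        if p_engine_w <= 0.0:
            return 0.0
        load = min(p_engine_w / p_max_w, 1.0)
        efficiency = 0.12 + 0.26 * load
        return p_engine_w / efficiency

    def ecms_split(p_demand_w, s, n_candidates=101, p_max_w=100e3):
        """Pick the engine/battery split minimizing the equivalent fuel
        power: fuel_power + s * battery_electrical_power."""
        best_split, best_cost = None, float("inf")
        for i in range(n_candidates):
            p_engine = min(p_demand_w, p_max_w) * i / (n_candidates - 1)
            p_battery = p_demand_w - p_engine  # remainder from the battery
            cost = engine_fuel_power(p_engine, p_max_w) + s * p_battery
            if cost < best_cost:
                best_split, best_cost = (p_engine, p_battery), cost
        return best_split

    # Example: 60 kW requested at the wheels, equivalence factor s = 2.5
    p_eng, p_bat = ecms_split(60e3, s=2.5)
    print(f"engine {p_eng / 1e3:.1f} kW, battery {p_bat / 1e3:.1f} kW")

In a full ECMS the equivalence factor is adapted online to keep the battery state of charge within bounds, whereas Dynamic Programming optimizes the split over the entire (known) drive cycle offline, which is why it serves as the global-optimum benchmark.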
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are processed in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis focuses mainly on two tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To this end, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or with automatic static and dynamic analyzers. Now the task can be tackled with learning approaches that speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method level). The data exploited and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to other related works are discussed.
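To make the PLI task concrete, here is a minimal baseline in Python, not one of the thesis models: a character n-gram classifier that labels a snippet with its language. The library (scikit-learn) and the toy training set are assumptions for illustration only.

    # Minimal text-based baseline for Programming Language Identification.
    # Illustrative sketch, not a thesis architecture: a character n-gram
    # classifier trained on a toy labeled set.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled snippets; a real system would train on archive-scale data.
    snippets = [
        "def main():\n    print('hello')",            # Python
        "#include <stdio.h>\nint main(void) {}",      # C
        "public static void main(String[] args) {}",  # Java
    ]
    labels = ["Python", "C", "Java"]

    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # char n-grams
        LogisticRegression(max_iter=1000),
    )
    model.fit(snippets, labels)
    print(model.predict(["def greet():\n    print('hi')"]))  # expected: ['Python']

Character n-grams sidestep language-specific tokenization, which is one reason they scale well to archives containing hundreds of languages.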
Abstract:
In recent decades, an unfavourable trend of rising inbreeding has accompanied the evident improvement in the productivity and performance of the domestic bovine population, predisposing it to the occurrence of recessively inherited disorders. The objectives of this thesis were: a) the study of genetic diseases applying a "forward genetic approach" (FGA); b) the estimation of the prevalence of deleterious alleles responsible for eight recessive disorders in different breeds; c) the collection of well-characterized material in a Biobank for Bovine Genetic Disorders. The FGA allowed the identification of seven new recessive deleterious variants (Paunch calf syndrome - KDM2B; Congenital cholesterol deficiency - APOB; Ichthyosis congenita - FA2H; Hypotrichosis - KRT71; Hypotrichosis - HEPHL1; Achromatopsia - CNGB3; Hemifacial microsomia - LAMB1) and of seven new de novo dominant deleterious variants (Achondrogenesis type II - two variants in COL2A1; Osteogenesis imperfecta - COL1A1; Skeletal-cardio-enteric dysplasia - MAP2K2; Congenital neuromuscular channelopathy - KCNG1; Epidermolysis bullosa simplex - KRT5; Classical Ehlers-Danlos syndrome - COL5A2) in different breeds, associated with a large spectrum of phenotypes affecting different body systems. The FGA was based on a sequence of clinical, genealogical, gross- and/or histopathological and genomic studies; in particular, a WGS trio approach (patient, dam and sire) was applied. The prevalence of deleterious alleles was calculated for Pseudomyotonia congenita, Paunch calf syndrome, Hemifacial microsomia, Congenital bilateral cataract, Ichthyosis congenita, Ichthyosis fetalis, Achromatopsia and Hypotrichosis. Of particular concern was the 12% allelic frequency found for Paunch calf syndrome in Romagnola cattle. With respect to the Biobank for Bovine Genetic Diseases, biological material from clinical cases and their available relatives, as well as from the controls used for the allele-frequency estimations, was stored at -20 °C. Altogether, around 16,000 samples were added to the biobank.
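The allele-frequency estimates mentioned above follow the standard gene-counting formula q = (2 x homozygotes + heterozygotes) / (2 x N). The short Python sketch below shows the computation with invented genotype counts; the 12% figure for Paunch calf syndrome comes from the thesis, not from these numbers.

    # Gene-counting estimate of a recessive allele's frequency, as used for
    # prevalence figures like those above. Counts here are invented for
    # illustration; they are not the thesis data.

    def allele_frequency(n_hom_alt: int, n_het: int, n_total: int) -> float:
        """q = (2 * affected homozygotes + heterozygous carriers) / (2 * N)."""
        return (2 * n_hom_alt + n_het) / (2 * n_total)

    # Example: 500 genotyped animals, 2 homozygous affected, 116 carriers
    q = allele_frequency(n_hom_alt=2, n_het=116, n_total=500)
    print(f"allele frequency q = {q:.1%}")                   # 12.0%
    print(f"expected affected at birth (q^2) = {q**2:.2%}")  # Hardy-Weinberg

Under Hardy-Weinberg assumptions, a 12% allele frequency implies roughly 1.4% of matings at risk of producing an affected calf, which is why such a frequency is flagged as a concern.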
Abstract:
This exploratory research takes its cue from the international debate, which developed at the end of the last century, on the need to innovate the education system from a lifelong-learning perspective and to foster the acquisition of the competences required in the 21st century. The various recommendations call for a school conceived as a Civic Center, able to recognize extra-curricular learning, with innovative learning spaces supporting learner-centred teaching. About thirty years after the Salamanca Statement, we consider it necessary to ask whether these innovations guarantee inclusion and educational success for all. The research consists of four case studies concerning two innovative Italian upper secondary schools and two Finnish ones. It aims to understand, on the basis of the perceptions of students, teachers and school leaders, whether this school model also fosters the inclusion and well-being of all students. The analysis of the results suggests that, according to the perceptions of the research participants, the schools have managed to make innovation and inclusion coexist. In particular, the use of innovative learning spaces and learner-centred teaching within a school open to the local community and able to recognize extra-curricular competences does seem to foster the inclusion of all students. Despite these innovative aspects, however, several critical issues remain within the schools analyzed that still prevent full inclusion for all.
Abstract:
The term Artificial Intelligence has acquired a lot of baggage since its introduction, and in its current incarnation it is synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all of them are created equal, though, and problems can arise especially in fields not closely related to the tasks that concern the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that integrate gracefully into established workflows, since in many real-world scenarios AI may not be good enough to completely replace humans; often such replacement is not even needed or desirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from "better models" towards better, and smaller, data, an approach he named Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is to create a tool for humans to use, more raw performance may not translate into more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems; human expertise proved crucial in improving both datasets and models. The last chapter is a slight deviation, with studies on the pandemic, while still preserving the human- and data-centric perspective.