917 results for Felicia Sartori


Relevance:

10.00%

Publisher:

Abstract:

In the municipality of São Sepé, in Rio Grande do Sul, Brazil, almost 80% of the population lives in the urban area (IBGE, 2010). It is therefore important to understand the urban space, whose structure and conformation can cause changes in the local climate, characterizing an urban climate. Cartographic documents were produced to characterize the urban site (geoecological aspects) and the urban structure and function (geourban aspects). The integrated analysis of the geomorphological, slope, slope-orientation, land-use and urban land-use maps shows the greatest building density and soil sealing in the central (commercial) area of the city, decreasing in the mixed-use (commercial and residential), residential and industrial areas. In some residential neighbourhoods, however, green areas are absent, which can harm the thermal comfort of the population. Irregular occupation and the absence of riparian forest along the river channels crossing the urban area are also observed. Impacts related to urbanization, such as floods, are already noticeable. Measures should therefore be taken to mitigate the current impacts, while urban planning strategies should be prioritized to prevent future problems.

Relevance:

10.00%

Publisher:

Abstract:

Contarina is a company that manages waste in 49 municipalities of the Province of Treviso. The internship carried out at this company aimed to analyse the traceability of the main waste streams treated in Contarina's plants, in order to assess the efficiency of the integrated municipal waste management system. After a brief regulatory overview, the management model adopted by the company is described, together with its operating procedures. In addition to calculating the percentage of separately collected material according to different methodologies, and to a quantitative and qualitative analysis of the waste produced, mass balances were drawn up for the following waste streams in order to estimate the actual amount of material sent to recovery: glass, plastic, metals, paper and cardboard, biodegradable fractions and residual waste. These categories make up the majority of the waste collected and treated by Contarina. The data analysis revealed good capture efficiency of the separate collection system as well as high product quality of the managed waste. As for treatment efficiency and the quantity of material sent to recovery, some streams proved more virtuous than others, such as paper, where 99.3% of the treated material became secondary raw material. The work also highlighted how the amount of recoverable material is influenced not only by the quality and type of waste treated, but also by the delivery method and by the separation efficiency of the plants. Overall, through the treatment of the main waste categories collected door-to-door and managed in Contarina's plants, the Treviso-based company sent more than 85% of the sorted material to recovery.
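To make the mass-balance arithmetic concrete, here is a minimal sketch of how a per-stream recovery rate can be computed; the tonnages are hypothetical placeholders, not Contarina's figures (only the ~99.3% paper ratio echoes the abstract).

```python
# Illustrative sketch (not Contarina's actual accounting): a simple mass
# balance for a waste stream, computing the share of treated material
# sent to recovery, as in the 99.3% figure quoted for paper/cardboard.

def recovery_rate(treated_t: float, recovered_t: float) -> float:
    """Percentage of treated material that becomes secondary raw material."""
    if treated_t <= 0:
        raise ValueError("treated mass must be positive")
    return 100.0 * recovered_t / treated_t

# Hypothetical tonnages, for illustration only.
streams = {
    "paper and cardboard": (12_000.0, 11_916.0),  # ~99.3% recovered
    "glass": (8_000.0, 7_400.0),
    "plastics": (5_000.0, 3_900.0),
}

for name, (treated, recovered) in streams.items():
    print(f"{name}: {recovery_rate(treated, recovered):.1f}% to recovery")
```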

Relevance:

10.00%

Publisher:

Abstract:

The main objective of the framework we propose is to help the physician obtain information about the patient's condition in order to reach the correct diagnosis as soon as possible. In our proposal, the number of interactions between the physician and the patient is reduced to a strict minimum on the one hand while, on the other hand, the number of questions asked can be increased if uncertainty about the diagnosis persists. These advantages are due to the fact that (i) we implement a reasoning component that allows us to predict one symptom from another without explicitly asking the patient, (ii) we consider non-binary values for the weights associated with the symptoms, (iii) we introduce a dataset filtering process to choose which partition should be used with respect to particular characteristics of the patient, and (iv) we add new functionality to the framework: the ability to detect further future risks for a patient whose pathology is already known. The experimental results we obtained are very encouraging.
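A minimal sketch of ideas (i) and (ii) above, symptom inference plus non-binary weights; all rules, weights and pathology names are hypothetical, not the authors' actual implementation.

```python
# Hypothetical sketch of the reasoning component and weighted scoring.

# (i) Reasoning component: infer one symptom from another without asking.
IMPLICATIONS = {"fever": ["fatigue"]}          # hypothetical rule base

# (ii) Non-binary symptom weights per pathology (hypothetical values).
WEIGHTS = {
    "flu":  {"fever": 0.9, "cough": 0.7, "fatigue": 0.5},
    "cold": {"cough": 0.6, "fatigue": 0.3},
}

def infer(symptoms: set[str]) -> set[str]:
    """Expand observed symptoms with implied ones."""
    inferred = set(symptoms)
    for s in symptoms:
        inferred.update(IMPLICATIONS.get(s, []))
    return inferred

def rank_diagnoses(symptoms: set[str]) -> list[tuple[str, float]]:
    """Score each pathology by the total weight of matched symptoms."""
    symptoms = infer(symptoms)
    scores = {
        d: sum(w for s, w in ws.items() if s in symptoms)
        for d, ws in WEIGHTS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses({"fever", "cough"}))
# fever implies fatigue, so flu scores 0.9 + 0.7 + 0.5 = 2.1
```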

Relevance:

10.00%

Publisher:

Abstract:

An R package supporting the analysis of spatio-temporal data. The package provides two functions that launch two web applications, developed with the shiny framework, for visualizing spatially referenced data of areal or point type. Starting from data uploaded by the user, the applications generate two interactive plots showing the temporal and spatial distribution of the phenomenon the data describe. The applications' user interface includes a set of components for customizing the resulting plots.

Relevance:

10.00%

Publisher:

Abstract:

Cosmic voids are vast underdense regions emerging between the elements of the cosmic web and dominating the large-scale structure of the Universe. Void number counts and density profiles have been demonstrated to provide powerful cosmological probes. Indeed, thanks to their low-density nature and their very large sizes, voids represent natural laboratories to test alternative dark energy scenarios, modifications of gravity and the presence of massive neutrinos. Despite the increasing use of cosmic voids in cosmology, a commonly accepted definition for these objects has not yet been reached. For this reason, different void finding algorithms have been proposed over the years. Void finder algorithms based on density or geometrical criteria are affected by intrinsic uncertainties. In recent years, new solutions have been explored to address these issues. The most interesting is based on the idea of identifying void positions through the dynamics of the mass tracers, without performing any direct reconstruction of the density field. The goal of this Thesis is to provide a performant void finder algorithm based on dynamical criteria. The Back-in-time void finder (BitVF) we present uses tracers as test particles and reconstructs their orbits from their actual clustered configuration back to the homogeneous and isotropic distribution expected in the early Universe. Once the displacement field is reconstructed, the density field is computed as its divergence, and void centres are identified as local minima of this field. In this Thesis work we applied the developed void finding algorithm to simulations. From the resulting void samples we computed different void statistics, comparing the results to those obtained with VIDE, the most popular void finder. BitVF proved able to produce more reliable void samples than VIDE. The BitVF algorithm will be a fundamental tool for precision cosmology, especially with upcoming galaxy surveys.
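A schematic sketch of the dynamical idea described above (density estimated from the divergence of a displacement field, void centres as its local minima), using a random placeholder field rather than a real reconstruction; grid size and smoothing window are illustrative assumptions.

```python
# Schematic sketch of the BitVF idea on a grid; not the thesis code.
import numpy as np
from scipy.ndimage import minimum_filter

rng = np.random.default_rng(0)
n = 64
# Hypothetical displacement-field components (dx, dy, dz) on an n^3 grid.
psi = [rng.normal(size=(n, n, n)) for _ in range(3)]

# To linear order (Zel'dovich approximation), delta ~ -div(psi).
div = sum(np.gradient(psi[i], axis=i) for i in range(3))
delta = -div

# Void centres: local minima of the density field within a search window.
window = 5
is_min = delta == minimum_filter(delta, size=window)
centres = np.argwhere(is_min)
print(f"{len(centres)} candidate void centres found")
```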

Relevance:

10.00%

Publisher:

Abstract:

Activation functions within neural networks play a crucial role in Deep Learning, since they allow the network to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint of quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts to realize quantum activation functions have been made in the literature. Recently, the idea of QSplines has been proposed to approximate a non-linear activation function by implementing a quantum version of spline functions. Yet, QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not directly available as a quantum state. Secondly, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints prevent the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, several methods for Variational Quantum Splines are proposed and implemented, paving the way for the development of complete quantum activation functions and unlocking the full potential of quantum neural networks in the field of quantum machine learning.
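As a purely classical illustration of the underlying spline idea (not the variational quantum circuits proposed in the thesis), one can approximate an activation function with a handful of spline knots:

```python
# Classical analogue of the (Q)Spline principle: approximate a non-linear
# activation with piecewise polynomials fitted on a few knots.
import numpy as np
from scipy.interpolate import CubicSpline

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

knots = np.linspace(-6, 6, 9)            # a few interpolation points
spline = CubicSpline(knots, sigmoid(knots))

x = np.linspace(-6, 6, 1000)
err = np.max(np.abs(spline(x) - sigmoid(x)))
print(f"max approximation error: {err:.4f}")
```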

Relevance:

10.00%

Publisher:

Abstract:

In recent years, due to the enormous progress of computing and the ever-growing amount of data generated, the need for new techniques, approaches and algorithms for searching data has become increasingly pressing. Indeed, the amount of information to be stored has grown to the point that "Big Data" is now a common term. This new scenario has made traditional approaches to data search increasingly ineffective. New search techniques have therefore been proposed, such as Nearest Neighbor searches. This work analyses the performance of neighbour search in a vector space using Elasticsearch as the data storage system on a cloud infrastructure. In particular, the search times of exact and approximate Nearest Neighbor searches were analysed and compared, also evaluating the loss of precision in the approximate case, using two different distance metrics: cosine similarity and the dot product.
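A minimal sketch of the exact-versus-approximate comparison with the official elasticsearch Python client, assuming an Elasticsearch 8.x cluster; the endpoint, index name, field name and query vector are placeholders.

```python
# Sketch of approximate vs. exact kNN search in Elasticsearch 8.x.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # placeholder endpoint
query_vector = [0.1] * 128                    # placeholder embedding

# Approximate kNN: HNSW index on a dense_vector field (cosine similarity).
approx = es.search(
    index="docs",
    knn={"field": "embedding", "query_vector": query_vector,
         "k": 10, "num_candidates": 100},
)

# Exact kNN: brute-force scoring of every document via script_score.
exact = es.search(
    index="docs",
    query={"script_score": {
        "query": {"match_all": {}},
        "script": {
            "source": "cosineSimilarity(params.v, 'embedding') + 1.0",
            "params": {"v": query_vector},
        },
    }},
)

print([h["_id"] for h in approx["hits"]["hits"]])
print([h["_id"] for h in exact["hits"]["hits"]])
```

Comparing the two hit lists gives the precision loss of the approximate search, while the reported `took` times give the latency difference.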

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates whether the emotional states of users interacting with a virtual robot can be recognized reliably, and whether a specific interaction strategy can change the users' emotional state and affect their risk decisions. For this investigation, the OpenFace [1] emotion recognition model was originally intended to be integrated into the Flobi [2] system, to make the agent aware of the user's current emotional state and able to react appropriately. An open-source ROS [3] bridge was available online to integrate OpenFace with the Flobi simulation, but it was not compatible with other projects in the Flobi distribution, so for technical reasons DeepFace was selected instead. In a human-agent interaction, the system is compared to a system without emotion recognition. Evaluation takes place at different levels: evaluation of the emotion recognition model, of the interaction strategy, and of the effect of the interaction on user decisions. The results showed that happy emotion induction was successful in 58% of cases and fear induction in 77%. Regarding risk decisions: after happy induction, 16.6% of participants switched to a lower-risk decision, 75% did not change their decision, and the remainder switched to a higher-risk decision; among fear-induced participants, 33.3% decreased their risk and 66.6% did not change their decision. Sensitivity and specificity were calculated for each emotion class; the recognition model showed a bias toward the neutral class, classifying happy emotions as neutral most of the time.
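For reference, a minimal sketch of per-image emotion recognition with DeepFace, the library ultimately selected; the file path is a placeholder, and the return format may vary between DeepFace versions.

```python
# Minimal sketch of single-frame emotion recognition with DeepFace.
from deepface import DeepFace

result = DeepFace.analyze(img_path="frame.jpg", actions=["emotion"])
# Recent DeepFace versions return a list with one entry per detected face.
face = result[0] if isinstance(result, list) else result
print(face["dominant_emotion"], face["emotion"])
```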

Relevance:

10.00%

Publisher:

Abstract:

Data are an invaluable resource for every organization. This information must on the one hand be managed through classic operational systems, and on the other analysed to obtain insights that can guide business decisions. One of the fundamental tools supporting business decisions is the data warehouse. This work is the result of an internship with the company Injenia S.r.l. The internship focused on optimizing a data warehouse that the company sells as an add-on module of a software product named Interacta. This data warehouse, Interacta Analytics, has shown significant architectural and performance problems over time. The architecture currently used to create and manage data within Interacta Analytics follows a batch approach; the key objective of the study is therefore to find alternative batch solutions that save both money and time, while also exploring the possibility of a transition to a streaming architecture. The tools used in this research also had to stay in line with the technologies used for Interacta, i.e. the services of the Google Cloud Platform. After a brief discussion of the theoretical background of this area, the work focuses on how the main software works and on the logical structure of the analytics module. Finally, the experimental work is presented: first an analysis of the main weaknesses of the as-is system, then the formulation and evaluation of four batch and two streaming improvement hypotheses. As the conclusions of the research show, these greatly improve the performance of the analytics system in terms of processing time, total cost and architectural simplicity, in particular thanks to the use of serverless container and FaaS services of Google's cloud platform.
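To make the batch-versus-streaming distinction concrete, a small sketch with the google-cloud-bigquery client; project, bucket and table names are placeholders, not Interacta Analytics' actual resources.

```python
# Sketch contrasting batch and streaming ingestion into BigQuery.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.analytics.events"      # placeholder

# Batch: periodic bulk load from Cloud Storage (billed as a load job).
load_job = client.load_table_from_uri(
    "gs://my-bucket/events/*.json",           # placeholder bucket
    table_id,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    ),
)
load_job.result()                             # wait for completion

# Streaming: rows become queryable within seconds of insertion.
errors = client.insert_rows_json(table_id, [{"event": "click", "ts": 1}])
print("streaming insert errors:", errors)
```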

Relevance:

10.00%

Publisher:

Abstract:

Day by day, machine learning is changing our lives in ways we could not have imagined just five years ago. ML expertise is more and more requested and needed, yet only a limited number of ML engineers are available on the job market, and their knowledge is always bounded by an inherent characteristic of theirs: they are human. This thesis explores the possibilities offered by meta-learning, a new field in ML that takes learning a level higher: models are trained on metadata about other models' training, from the features of the datasets they were trained on, to inference times and obtained performances, in order to understand the relationship between a good model and the way it was obtained. The resulting metamodel was trained on data collected from OpenML, the largest publicly available ML metadata platform. Datasets were analysed to obtain meta-features that describe them, which were then tied to model performances in a regression task. The obtained metamodel predicts the expected performance of a given model type (e.g., a random forest) on a given ML task (e.g., classification on the UCI census dataset). This research was then integrated into a custom-made AutoML framework, to show how meta-learning is not an end in itself but can be used to further progress ML research. Encoding ML engineering expertise in a model allows better, faster, and more impactful ML applications across the whole world, while reducing the cost inevitably tied to human engineers.
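A minimal sketch of the metamodel pipeline under stated assumptions: meta-features are fetched with the openml package and a regressor maps them to performance; the dataset ids are examples, and the performance targets are placeholders standing in for measured OpenML run results.

```python
# Sketch of the meta-learning pipeline: dataset meta-features -> performance.
import openml
import numpy as np
from sklearn.ensemble import RandomForestRegressor

meta_feature_names = ["NumberOfInstances", "NumberOfFeatures",
                      "NumberOfClasses"]
dataset_ids = [61, 31, 1464]     # example OpenML ids (iris, credit-g, blood)

X = []
for did in dataset_ids:
    qualities = openml.datasets.get_dataset(did).qualities
    X.append([qualities[name] for name in meta_feature_names])

# Placeholder targets: in the thesis these are measured performances
# (e.g. accuracy of a given model type) collected from OpenML runs.
y = np.array([0.95, 0.76, 0.78])

metamodel = RandomForestRegressor(random_state=0).fit(X, y)
print(metamodel.predict(X))      # expected performance per dataset
```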

Relevance:

10.00%

Publisher:

Abstract:

The following thesis investigates the issues concerning the maintenance of a Machine Learning model over time, regarding both the versioning of the model itself and of the data on which it is trained, and the tools for monitoring the data and their distribution. The themes of Data Drift and Concept Drift are then explored, and the performance of some of the most popular techniques in the field of Anomaly Detection, such as VAE, PCA, and Monte Carlo Dropout, is evaluated.
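As an illustration of one of the techniques mentioned, a minimal sketch of PCA reconstruction error used as a drift score on synthetic data; the threshold, dimensions and data are illustrative assumptions.

```python
# Sketch: PCA reconstruction error as a data-drift / anomaly score.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 20))                       # training-time data
drifted = reference + rng.normal(2.0, 1.0, size=(1000, 20))   # shifted batch

pca = PCA(n_components=5).fit(reference)

def reconstruction_error(X):
    """Mean squared error between X and its PCA reconstruction."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.mean((X - X_hat) ** 2, axis=1)

# Flag new points whose error exceeds the 99th percentile on reference data.
threshold = np.quantile(reconstruction_error(reference), 0.99)
flagged = np.mean(reconstruction_error(drifted) > threshold)
print(f"{flagged:.0%} of the new batch flagged as drifted")
```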

Relevance:

10.00%

Publisher:

Abstract:

A global Italian pharmaceutical company has to provide two work environments that serve different needs. The environments will allow teams to develop solutions in a controlled, secure, and at the same time independent manner on a state-of-the-art enterprise cloud platform. The need for two different environments is dictated by the needs of the working units. The first environment is designed to facilitate the creation of applications related to genomics, and is therefore aimed more at data scientists. This environment is capable of consuming, producing, retrieving and incorporating data, and will support the programming languages most used for genomic applications (e.g., Python, R). The proposal was to provide a pool of ready-to-go Virtual Machines with different architectures, offering the best performance for the job at hand. The second environment is more traditional in nature: its purpose is to obtain, via an ETL (Extract-Transform-Load) process, a global data model resembling a classical relational structure. It will provide the major BI operations (e.g., analytics, performance measures, reports) that can be leveraged both for application analysis and for internal usage. Since both architectures will maintain large amounts of data regarding not only pharmaceutical information but also internal company information, it will be possible to digest the data with reporting and analytics tools and to apply data-mining and machine-learning technologies to exploit the intrinsic information. The thesis work introduces the proposals, implementations, descriptions of the technologies and platforms used, and future work for the environments discussed above.
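A minimal sketch of the ETL pattern behind the second environment, with SQLite standing in for the company's cloud warehouse; file, column and table names are hypothetical.

```python
# Minimal ETL sketch: extract raw files, transform with pandas,
# load into a relational table usable by BI tools.
import sqlite3
import pandas as pd

# Extract: raw export from a source system (placeholder file).
raw = pd.read_csv("genomics_export.csv")

# Transform: normalise column names and drop malformed rows
# (placeholder column name).
raw.columns = [c.strip().lower() for c in raw.columns]
clean = raw.dropna(subset=["sample_id"])

# Load: write into the relational model.
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("samples", conn, if_exists="replace", index=False)
```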

Relevance:

10.00%

Publisher:

Abstract:

The usage of Optical Character Recognition (OCR) systems is a widespread technology in the world of Computer Vision and Machine Learning. It is a topic that interests many fields, for example automotive, where it becomes a specialized task known as License Plate Recognition, useful for many applications from toll-road automation to intelligent payments. However, OCR systems need to be very accurate and generalizable in order to extract the text of license plates under highly variable conditions, from the type of camera used for acquisition to changes in lighting. Such variables compromise the quality of digitized real scenes, causing noise and degradation of various types, which can be minimized by applying modern approaches for image super-resolution and noise reduction. One class of these approaches is known as Generative Neural Networks, which are a strong ally in solving this popular problem.
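A minimal sketch of the basic License Plate Recognition step, where a classical resize-and-denoise stands in for the learned super-resolution discussed above; the file name is a placeholder, and pytesseract/OpenCV are assumed installed.

```python
# Sketch: enhance a plate crop classically, then run OCR on it.
import cv2
import pytesseract

plate = cv2.imread("plate_crop.png", cv2.IMREAD_GRAYSCALE)  # placeholder

# Stand-in for learned super-resolution: upscale and denoise classically.
plate = cv2.resize(plate, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
plate = cv2.fastNlMeansDenoising(plate, h=10)
_, plate = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --psm 7: treat the image as a single line of text (the plate).
text = pytesseract.image_to_string(plate, config="--psm 7")
print(text.strip())
```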