871 results for Learning to read


Relevance:

90.00%

Publisher:

Abstract:

Despite the scepticism of many scholars about the possibility of predicting the stock market, several theories hypothesize that known information can be used to predict its future movements. The advent of artificial intelligence in the second half of the last century has produced revolutionary results in many fields, so much so that today this discipline is widely present in our daily lives in many forms. In particular, machine learning has made it possible to develop intelligent systems that learn from data and can model complex problems. Given the success of these systems, they have also been applied to the hard task of predicting the stock market, first using historical financial market data as the source of knowledge and then, with the development of natural language processing (NLP) techniques, also using natural-language data such as the text of financial news or the opinions of investors. This work aims to provide an overview of the use of machine learning techniques in the field of stock market prediction, starting from the most elementary techniques and moving on to the complex neural models that represent today's state of the art. The operation of machine learning models and the techniques used to train and evaluate them are also formalized; finally, an experiment is carried out in which, starting from financial and, above all, textual data, we attempt to correctly predict the change in the value of the S&P 500 stock index using a language model based on a neural network.
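
As a rough illustration of the text-based prediction pipeline described above, the sketch below trains a simple bag-of-words classifier on news headlines to guess the direction of the index. It is only a classical stand-in for the neural language model used in the thesis; the headlines and labels are invented for the example.

# Hypothetical sketch: predict next-day S&P 500 direction from news headlines.
# A TF-IDF + logistic-regression baseline stands in for the neural language
# model used in the thesis; the headlines and labels below are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Fed signals rate cut as inflation cools",
    "Tech earnings miss expectations, futures slide",
    "Strong jobs report lifts market sentiment",
    "Banking sector turmoil spreads to Europe",
]
direction = [1, 0, 1, 0]  # 1 = index closed higher the next day, 0 = lower (made up)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, direction)
print(model.predict(["Central bank hints at further easing"]))  # e.g. [1]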

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes a study conducted to develop a new approach for the design of compliant mechanisms. Currently, compliant mechanisms are based on a 2.5D design method, and the range of applications for which they can be used this way is limited. The proposed research suggests using a 3D approach for the design of CMs, to better exploit their useful properties. To test the viability of this method, a practical application related to morphing wings was chosen. During this project, a working prototype of a variable-sweep and variable-AoA system was designed and built for an SUAV. A compliant hinge allows the system to achieve two DOF; this hinge was designed using the proposed 3D design approach. Two methods were used to validate the capabilities of the design. The first was simulation: using analysis software, a basic picture of the stress and deformation of the designed mechanism could be obtained. The second validation was done by means of AM: using FDM and material jetting technologies, several prototypes were manufactured. The first model showed that the two DOF could be achieved, and models manufactured with material jetting technology proved that the design could provide the desired motion and exploit the positive characteristics of CMs. The system could be manufactured successfully in one part, which makes an extensive assembly process unnecessary and improves its structural quality. The materials chosen for the prototypes were PLA, VeroGray and Rigur; their properties were suboptimal for the final purpose, but successful results were obtained. The prototypes proved tough and were able to provide the desired motion. This shows that the proposed design method can be a useful tool for the design of improved CMs. Furthermore, the variable sweep & AoA system could be used to boost the flight performance of SUAVs.

Relevance:

90.00%

Publisher:

Abstract:

The current climate crisis requires a comprehensive understanding of biodiversity in order to understand how ecosystems' responses to anthropogenic disturbances may result in feedbacks that either mitigate or exacerbate global warming. Although ecosystems are dynamic and macroecological patterns change drastically in response to disturbance, dynamic macroecology has received insufficient attention and theoretical formalisation. In this context, the maximum entropy principle (MaxEnt) could provide an effective inference procedure to study ecosystems. Since the improper usage of entropy outside its scope often leads to misconceptions, the opening chapter clarifies its meaning by following its evolution from classical thermodynamics to information theory. The second chapter introduces the study of ecosystems from a physicist's viewpoint; in particular, the MaxEnt Theory of Ecology (METE) is the cornerstone of the discussion. METE predicts the shapes of macroecological metrics in relatively static ecosystems using constraints imposed by static state variables. However, in disturbed ecosystems with macroscale state variables that change rapidly over time, its predictions tend to fail. In the final chapter, DynaMETE is therefore presented as an extension of METE from static to dynamic. By predicting how macroecological patterns are likely to change in response to perturbations, DynaMETE can contribute to a better understanding of the fate of disturbed ecosystems and to improving the conservation and management of carbon sinks, such as forests. Targeted strategies in ecosystem management are now indispensable to strengthen the interdependence of human well-being and the health of ecosystems, thus avoiding climate change tipping points.
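
For reference, the MaxEnt inference procedure invoked above takes the following textbook form (maximise Shannon entropy under mean-value constraints); this is the generic statement, not METE's specific choice of state variables:

\begin{align}
  &\max_{\{p_i\}}\ \Big(-\sum_i p_i \ln p_i\Big)
  \quad\text{subject to}\quad \sum_i p_i = 1,\quad \sum_i p_i\, f_k(i) = \langle f_k\rangle, \\
  &\Longrightarrow\quad
  p_i = \frac{1}{Z}\exp\!\Big(-\sum_k \lambda_k f_k(i)\Big),
  \qquad Z = \sum_i \exp\!\Big(-\sum_k \lambda_k f_k(i)\Big).
\end{align}

The Lagrange multipliers \(\lambda_k\) are fixed by the constraints; in METE the constrained quantities \(f_k\) derive from static state variables such as total abundance and total metabolic rate.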

Relevance:

90.00%

Publisher:

Abstract:

The aim of this dissertation is to present the sequence of events that led the scientific community of the early 20th century to conceive of an expanding Universe born from a single origin. Among the facts reported here, some are well known, while others are little-known backstories, not easy either to obtain or to verify. Indeed, several matters discussed in this thesis, now as then, remain a battleground among scientists. Among the numerous personalities whose contributions are discussed in this work, the main protagonist is surely Georges Lemaître, who managed to combine, without conflating them, his roles as both a priest and a scientist. The first chapter is dedicated to his biography, from his childhood in Belgium, to his early adulthood between England and the USA, to his success in the scientific community. The second and third chapters explain how the race developed towards the understanding of a Universe that not only expands but also originated from a singularity. The Belgian priest's discoveries, as shown, were challenged by other important scientists, with whom, in several cases, Lemaître had a friendly relationship. The fourth and final chapter therefore deals with the many relationships that the priest managed to build, thanks to his politeness and kindness. It also covers Lemaître's personal connection with the Church and religion, without forgetting the personalities who influenced him, above all Saint Thomas Aquinas. To conclude the thesis, two appendices gather a summary of Lemaître's works not already described in the chapters, as well as the biographies of all the figures presented in this dissertation.

Relevance:

90.00%

Publisher:

Abstract:

Deep Learning has radically transformed the world of Machine Learning, improving the state of the art in fields ranging from computer vision to natural language processing. Not stopping at classification problems, in recent years generative applications have led to the creation of realistic images and literary texts. The world of music is no stranger to a multitude of experiments in the same field, with results that are still immature but nonetheless potentially interesting. This thesis discusses the application of a model belonging to the Deep Learning family to the generation of symbolic music.
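
As a minimal sketch of what a Deep Learning model for symbolic music generation can look like, the toy example below trains a small LSTM to predict the next note token in a sequence; the vocabulary, data and hyperparameters are placeholders, and the thesis does not necessarily use this architecture.

# Hypothetical sketch of a symbolic-music generator: a small LSTM that learns
# to predict the next note (as a MIDI pitch index) from the previous ones.
import torch
import torch.nn as nn

VOCAB = 128  # one token per MIDI pitch (assumption)

class NoteLSTM(nn.Module):
    def __init__(self, vocab=VOCAB, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)          # (batch, seq, emb)
        h, _ = self.lstm(x)             # (batch, seq, hidden)
        return self.out(h)              # logits over the next note

model = NoteLSTM()
# Toy batch: 8 sequences of 32 notes each (random placeholders, not real music)
notes = torch.randint(0, VOCAB, (8, 32))
logits = model(notes[:, :-1])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), notes[:, 1:].reshape(-1))
loss.backward()
print(loss.item())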

Relevance:

90.00%

Publisher:

Abstract:

In this thesis, we perform a next-to-leading-order calculation of the impact of primordial magnetic fields (PMFs) on the evolution of scalar cosmological perturbations and the cosmic microwave background (CMB) anisotropy. Magnetic fields are found everywhere in the Universe, at all scales probed so far, but their origin is still under debate. The current standard picture is that they originate from the amplification of initial seed fields, which could have been generated as PMFs in the early Universe. The most robust way to test their presence and constrain their features is to study how they affect key cosmological observables, in particular the CMB anisotropies. The standard way to model a PMF is to consider its contribution (quadratic in the magnetic field) on the same footing as first-order perturbations, under the assumptions of ideal magnetohydrodynamics and compensated initial conditions. In view of the ever-increasing precision of CMB anisotropy measurements and of possible unaccounted-for non-linear effects, in this thesis we study effects that go beyond the standard assumptions. We study the impact of PMFs on cosmological perturbations and CMB anisotropies with adiabatic initial conditions, the effect of Alfvén waves on the speed of sound of the perturbations, and the possible non-linear behaviour of the baryon overdensity for PMFs with a blue spectral index, by modifying and improving the publicly available Einstein-Boltzmann code SONG, which was written to take into account all second-order contributions in cosmological perturbation theory. One of the objectives of this thesis is to lay the groundwork for verifying, through an independent and fully numerical analysis, the possibility that PMFs affect recombination and the Hubble constant.
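
For orientation, the statement that Alfvén waves alter the effective sound speed of the perturbations can be traced back to the standard ideal-MHD relations below, written in their textbook non-relativistic form in Gaussian units; the cosmological, relativistic normalisation actually used in the thesis may differ:

\begin{equation}
  v_A^2 = \frac{B^2}{4\pi\rho}, \qquad c_{\rm ms}^2 = c_s^2 + v_A^2,
\end{equation}

where \(c_s\) is the ordinary sound speed of the fluid and \(c_{\rm ms}\) the speed of the fast magnetosonic mode propagating perpendicular to the mean field.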

Relevance:

90.00%

Publisher:

Abstract:

Nowadays the concept of information has become crucial in physics; therefore, since the best theory we have for making predictions about the universe is quantum mechanics, the development of a quantum version of information theory takes on particular importance. This centrality is confirmed by the fact that black holes have entropy. For this reason, this work presents elements of quantum information theory and quantum communication, some of which are illustrated with reference to highly idealized quantum models of black hole mechanics. In particular, the first chapter provides all the quantum-mechanical tools needed for quantum information and communication theory. Quantum information theory is then addressed, and the Bekenstein bound on the amount of information that can be enclosed within any spatial region is derived. This question is treated using an idealized quantum model of black hole mechanics supported by thermodynamics. In the last chapter, the problem of finding an achievable rate for quantum communication is examined, again making use of an idealized quantum model of a black hole in order to illustrate elements of the theory. Finally, a brief summary of black hole physics is provided in an appendix.
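
The Bekenstein bound referred to above can be stated compactly: the entropy S (and hence the information content) of a spherical region of radius R enclosing total energy E satisfies

\begin{equation}
  S \;\le\; \frac{2\pi k_B\, R\, E}{\hbar c},
\end{equation}

and the bound is saturated by a Schwarzschild black hole with R equal to its horizon radius, which is one reason idealized black-hole models are a natural testing ground for the theory.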

Relevance:

90.00%

Publisher:

Abstract:

Activation functions within neural networks play a crucial role in Deep Learning, since they make it possible to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint on quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts have been made in the literature to tackle the realization of a quantum activation function. Recently, the idea of QSplines has been proposed to approximate a non-linear activation function by implementing a quantum version of spline functions. Yet, QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not directly available as a quantum state. Secondly, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints prevent the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, a few different methods for Variational Quantum Splines are proposed and implemented, to pave the way for the development of complete quantum activation functions and unlock the full potential of quantum neural networks in the field of quantum machine learning.
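
As a purely classical reference for what a (variational) QSpline aims to reproduce on quantum hardware, the snippet below builds a spline approximation of a sigmoid activation; it is illustrative only and does not reflect the quantum or hybrid implementations developed in the thesis.

# Classical reference computation: a spline approximation of a non-linear
# activation (a sigmoid). The knot placement and interval are arbitrary.
import numpy as np
from scipy.interpolate import CubicSpline

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Knots over the interval of interest and the spline fitted through them
knots = np.linspace(-5, 5, 11)
spline = CubicSpline(knots, sigmoid(knots))

x = np.linspace(-5, 5, 201)
max_err = np.max(np.abs(spline(x) - sigmoid(x)))
print(f"max approximation error on [-5, 5]: {max_err:.2e}")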

Relevance:

90.00%

Publisher:

Abstract:

In the Massive IoT vision, millions of devices need to be connected to the Internet through a wireless access technology. However, current IoT-focused standards are not fully prepared for this future. In this thesis, a novel approach to Non-Orthogonal techniques for Random Access, which is the main bottleneck in high-density systems, is proposed. First, the most popular wireless access standards are presented, with a focus on Narrowband-IoT. Then, the Random Access procedure as implemented in NB-IoT is analyzed. The Non-Orthogonal Random Access technique is presented next, along with two potential algorithms for the detection of non-orthogonal preambles. Finally, the performance of the proposed solutions is evaluated through numerical simulations.
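
To make the detection problem concrete, the snippet below shows a generic correlation-based detector for non-orthogonal preambles; it is not one of the two algorithms studied in the thesis, and the preamble length, noise level and threshold are arbitrary.

# Generic correlation-based preamble detector, shown only to illustrate the
# detection problem; the preambles, noise level and threshold are made up.
import numpy as np

rng = np.random.default_rng(0)
n_preambles, length = 8, 64

# Random (hence non-orthogonal) unit-norm preamble sequences
preambles = rng.standard_normal((n_preambles, length))
preambles /= np.linalg.norm(preambles, axis=1, keepdims=True)

# Received signal: superposition of preambles 2 and 5 plus noise
received = preambles[2] + preambles[5] + 0.1 * rng.standard_normal(length)

# Correlate with every candidate preamble and threshold the scores
scores = preambles @ received
detected = np.flatnonzero(scores > 0.5)
print("detected preambles:", detected)  # ideally [2, 5]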

Relevance:

90.00%

Publisher:

Abstract:

Currently, making digital 3D models and replicas of cultural heritage assets plays an important role in preservation and in providing a highly detailed source for future research and intervention. This dissertation attempts to assess different methods for digitally surveying and making 3D replicas of cultural heritage assets at different scales. The methodologies vary in the devices, software, workflow, and amount of skill required. The three phases of the 3D modelling process are data acquisition, modelling, and model presentation. Each of these phases is divided into sub-sections, and several approaches, methods, devices, and software packages may be employed; furthermore, the selection should be based on the goal of the operation, the available facilities, the scale and properties of the object or structure to be modelled, and the operators' expertise and experience. The key point to remember is that the 3D modelling operation should be suitably accurate, precise, and reliable; for this reason, there are many instructions and pieces of advice on how to perform 3D modelling effectively. This work compares and evaluates the various options for each phase, explaining and demonstrating their differences, benefits, and drawbacks, so as to serve as a simple guide for new and/or inexperienced users.

Relevance:

90.00%

Publisher:

Abstract:

In recent years, thanks to technological progress and the ever-greater availability of large amounts of data, Artificial Intelligence has literally exploded, coming to embrace many domains, including Neuroscience. This thesis therefore illustrates the current role that Artificial Intelligence (AI) plays in Neuroscience. The thesis begins with an introductory chapter that provides basic notions about AI, useful for a better understanding of the following chapters. The second chapter gives examples of the areas in which AI is used in Neuroscience: each example has an introductory part describing the context, followed by a part in which, by reporting one or more recent scientific articles, the benefits that the area can draw from the use of AI are described. Finally, the third chapter, following the same scheme as the examples in the second chapter, examines in depth the role of AI in Mental Health, focusing mainly on the pathological aspect and, more precisely, on the advantages that AI can bring to the prevention, diagnosis, and treatment of Major Depressive Disorder (MDD) and Autism Spectrum Disorder (ASD).

Relevance:

90.00%

Publisher:

Abstract:

Recent analyses of the data collected by ALICE show that our understanding of heavy-flavour hadronization phenomena is still incomplete, because the measurements performed in pp, p-Pb and Pb-Pb collisions cannot be reproduced by theoretical models based on other types of collisions, such as e+e−. In particular, the results seem to indicate that the universality principle, which assumes that the fragmentation functions of quarks and gluons are independent of the type of interacting system, does not hold. For this reason, new theoretical and phenomenological models have been developed, capable of reproducing the experimental data with varying degrees of accuracy. These models differ from one another mainly at low values of transverse momentum pT. Data analysis at low pT is therefore of fundamental importance, since it makes it possible to discriminate, among the various models, those that are actually able to reproduce the experimental data from those that are not; moreover, it can provide experimental confirmation of the physical phenomena on which a given model is based. In this thesis, the number of Λ+c baryons (the yield) produced in pp collisions at √s = 13 TeV was extracted in the transverse-momentum range 0 < pT(Λ+c) < 1 GeV/c. A machine learning technique based on a Boosted Decision Trees (BDT) algorithm, implemented in the TMVA package, was used to identify and remove a large fraction of the statistical background and to considerably simplify the actual analysis. The reliability of the measurement was verified by extracting the yield with two different approaches: first, by modelling the combinatorial background with an analytic function, and then by building an ad hoc statistical template with the track-rotation technique.
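
To illustrate the kind of signal/background separation described above, the sketch below trains a boosted-decision-tree classifier on toy data; the thesis uses the BDT implementation of ROOT's TMVA package, for which scikit-learn's GradientBoostingClassifier is only a stand-in, and the two features are invented.

# Illustrative signal/background separation with a boosted decision tree.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
# Toy "signal" and "background" candidates described by two invented features
signal = rng.normal(loc=[1.0, 0.5], scale=0.4, size=(n, 2))
background = rng.normal(loc=[0.0, 0.0], scale=0.6, size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_train, y_train)
print(f"test accuracy: {bdt.score(X_test, y_test):.3f}")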

Relevance:

90.00%

Publisher:

Abstract:

The aim of this thesis is the study of the normal phase of a mass-imbalanced and polarized ultra-cold Fermi gas in the context of the BCS-BEC crossover, using a diagrammatic approach known as the t-matrix approximation. More specifically, the calculations are implemented using the fully self-consistent t-matrix (or Luttinger-Ward) approach, which has already been validated experimentally and numerically for the balanced case. An imbalance (polarization) between the two spin populations works against pairing and superfluidity. For sufficiently large polarization (and not too strong attraction) the system remains in the normal phase even at zero temperature. This phase is expected to be well described by Landau's Fermi liquid theory. By reducing the spin polarization, a critical imbalance is reached where a quantum phase transition towards a superfluid phase occurs and the Fermi liquid description breaks down. Depending on the strength of the interaction, the exotic superfluid phase at the quantum critical point (QCP) can be either an FFLO phase (Fulde-Ferrell-Larkin-Ovchinnikov) or a Sarma phase. In this regard, the presence of mass imbalance can strongly influence the nature of the QCP by favouring one of these two exotic types of pairing over the other, depending on whether the majority species is heavier or lighter than the minority. The analysis of the system is carried out by focusing on the temperature-coupling-polarization phase diagram for different mass ratios of the two components and on the study of different thermodynamic quantities at finite temperature. The evolution towards non-Fermi-liquid behavior at the QCP is investigated by calculating the fermionic quasi-particle residues, the effective masses, and the self-energies at zero temperature.
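
For reference, the quasi-particle residue and effective mass mentioned at the end are defined in the standard Fermi-liquid way from the self-energy Σ; the expressions below are written for one spin component σ in units with ħ = 1, and the conventions used in the thesis may differ in notation:

\begin{align}
  Z_\sigma &= \Bigg[\,1 - \frac{\partial\,\mathrm{Re}\,\Sigma_\sigma(k_{F\sigma},\omega)}{\partial\omega}\Bigg|_{\omega=0}\Bigg]^{-1}, \\
  \frac{m^*_\sigma}{m_\sigma} &=
  \Bigg[\,1 - \frac{\partial\,\mathrm{Re}\,\Sigma_\sigma}{\partial\omega}\Bigg|_{\omega=0}\Bigg]
  \Bigg[\,1 + \frac{m_\sigma}{k_{F\sigma}}\,
  \frac{\partial\,\mathrm{Re}\,\Sigma_\sigma(k,0)}{\partial k}\Bigg|_{k=k_{F\sigma}}\Bigg]^{-1}.
\end{align}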

Relevance:

90.00%

Publisher:

Abstract:

The amplitude of motor evoked potentials (MEPs) elicited by transcranial magnetic stimulation (TMS) of the primary motor cortex (M1) shows a large variability from trial to trial, even though the MEPs are evoked by the same repeated stimulus. A multitude of factors is believed to influence MEP amplitudes, such as the cortical, spinal and motor excitability state. The goal of this work is to explore to what degree the variation in MEP amplitudes can be explained by the cortical state right before the stimulation. Specifically, we analyzed a dataset acquired on eleven healthy subjects comprising, for each subject, 840 single TMS pulses applied to the left M1 during acquisition of electroencephalography (EEG) and electromyography (EMG). An interpretable convolutional neural network, named SincEEGNet, was used to discriminate between low- and high-corticospinal-excitability trials, defined according to the MEP amplitude, using the pre-TMS EEG as input. This data-driven approach made it possible to consider multiple brain locations and frequency bands without any a priori selection. Post-hoc interpretation techniques were adopted to identify the EEG features most relevant for the classification. Results show that individualized classifiers successfully discriminated between low and high M1 excitability states in all participants. The outcomes of the interpretation methods suggest the importance of the electrodes located over the TMS stimulation site, as well as the relevance of the temporal samples of the input EEG closest to the stimulation time. This novel decoding method allows causal investigation of the cortical excitability state, which may be relevant for personalizing and increasing the efficacy of therapeutic brain-state-dependent brain stimulation (for example, in patients affected by Parkinson's disease).
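
As a minimal sketch of the decoding setup (multichannel pre-TMS EEG epochs classified into low/high excitability), the toy model below is an EEGNet-style convolutional classifier; it is not the SincEEGNet architecture used in the thesis, and the channel count, epoch length and hyperparameters are placeholders.

# Minimal convolutional classifier for pre-TMS EEG epochs (channels x time)
# into two corticospinal-excitability classes. All sizes are placeholders.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES = 63, 500  # e.g. 1 s of EEG at 500 Hz (assumption)

class TinyEEGNet(nn.Module):
    def __init__(self, n_ch=N_CHANNELS, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),  # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_ch, 1)),                # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)),
            nn.Flatten(),
            nn.Linear(16 * 16, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, channels, samples)
        return self.net(x)

model = TinyEEGNet()
epochs = torch.randn(4, 1, N_CHANNELS, N_SAMPLES)   # 4 toy trials
labels = torch.tensor([0, 1, 1, 0])                 # low / high MEP amplitude
loss = nn.CrossEntropyLoss()(model(epochs), labels)
loss.backward()
print(loss.item())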

Relevance:

90.00%

Publisher:

Abstract:

Mathematical models, and the associated methods applied to real contexts, are essential tools for designing and evaluating solutions concerning the physical elements and/or organizational components of transportation systems. To this end, the systems engineering approach is used, which considers the relationships among the transportation system elements and their performances. This approach makes it possible to quantify the effects of transportation projects by taking into account the intrinsic complexity of the transportation system, and then to assess the effects of solutions that solve, or mitigate, transportation problems. This thesis focuses on the application of the transport system engineering approach to a real city, Bologna in northern Italy, in order to:
1. simulate the current transportation system conditions (status quo);
2. compare and assess the results obtained by two different approaches for simulating the link traffic flows on the road transportation network and their related impacts (externalities);
3. identify potential solutions to critical aspects, particularly in terms of traffic flow congestion and related environmental impacts (findings).
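
As a small illustration of one standard ingredient of link-flow simulation, the snippet below implements the BPR (Bureau of Public Roads) volume-delay function, which maps the flow on a road link to a congested travel time; the thesis does not necessarily use this function, and the parameter values are the commonly quoted defaults.

# BPR volume-delay function: congested travel time on a link as a function
# of its flow. alpha and beta are the commonly quoted default parameters.
def bpr_travel_time(free_flow_time, flow, capacity, alpha=0.15, beta=4.0):
    """Congested travel time on a link (same units as free_flow_time)."""
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

# Example: a link with a 5-minute free-flow time and capacity 1800 veh/h
for flow in (900, 1800, 2700):
    t = bpr_travel_time(5.0, flow, 1800)
    print(f"flow = {flow:4d} veh/h -> travel time = {t:.2f} min")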