996 results for deep processing


Relevance: 30.00%

Abstract:

Research on sensory processing, that is, the way animals see, hear, smell, taste, feel, and electrically and magnetically sense their environment, has advanced a great deal over the last fifteen years. This book discusses the most important themes that have emerged from recent research and provides a summary of likely future directions. The book starts with two sections on the detection of sensory signals over long and short ranges by aquatic animals, covering navigation, communication, and the finding of food and other localized sources. The next section, on the co-evolution of signal and sense, deals with how animals decide whether a source is prey, predator or mate by using receptors that have evolved to take full advantage of the acoustical properties of the signal. Organisms living in the deep-sea environment have also received much recent attention, so the following section deals with visual adaptations to light-limited environments, where sunlight is replaced by bioluminescence and the visual system has undergone changes to optimize light capture and sensitivity. The last section, on central co-ordination of sensory systems, covers how signals are processed and filtered for use by the animal. This book will be essential reading for all researchers and graduate students interested in sensory systems.

Relevance: 30.00%

Abstract:

One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between an LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit-to-receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Control of the loop stability, and of the level of sidetone heard by the handset user, is by a microprocessor, which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter for the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation for the rate of adaptation over a 36 dB range of output level, 4 rates of adaptation (with a maximum of 465 dB/s), plus the ability to cascade up to 4 filters without loss of performance. A theory has also been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range. Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing, and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power it is not a panacea, and attention must be directed toward the physical acoustic isolation between loudspeaker and microphone.
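As a rough illustration of the adaptive cancellation described above, the sketch below implements a normalized LMS (NLMS) echo canceller in Python: the power-normalized step size plays the same role as the thesis's automatic compensation of the adaptation rate over the output level. The coupling path, filter length and gains are assumptions for the sketch, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                       # telephone-band sample rate (assumed)
n = 4 * fs
x = rng.standard_normal(n)      # far-end signal driving the loudspeaker

# Hypothetical 64-tap loudspeaker-to-microphone coupling path
h_true = rng.standard_normal(64) * np.exp(-np.arange(64) / 8.0)
d = np.convolve(x, h_true)[:n]  # echo picked up by the microphone

taps = 64
w = np.zeros(taps)              # adaptive filter estimate of the path
mu, eps = 0.5, 1e-6             # step size and regularizer (assumed)
err = np.zeros(n)
for i in range(taps, n):
    u = x[i - taps:i][::-1]            # most recent input samples
    e = d[i] - w @ u                   # residual echo after cancellation
    w += mu * e * u / (u @ u + eps)    # normalized update: the step is
                                       # scaled by input power, keeping the
                                       # adaptation rate level-independent
    err[i] = e

print("echo reduction: %.1f dB"
      % (10 * np.log10(np.mean(d[taps:] ** 2) / np.mean(err[taps:] ** 2))))
```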

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

An extensive literature review failed to uncover an adequate operational definition of dyslexia applicable to education. The predominant fields of research that have produced most of the studies on dyslexia are neurology, neurolinguistics and genetics. Their results, however, are confined to the medical domain and are of little use to a teacher. The categorization of surface and deep dyslexia was shown to be the best description of dyslexia when it is defined as a reading disorder in an educational context. The purpose of this thesis was to develop a theoretical conceptual framework in which the reading difficulties of dyslexic children are linked to problem solving within information processing. The framework was validated by three experts: one in cognitive psychology, one in dyslexia, and a teacher. The problem-solving perspective is drawn from information-processing theories in cognitive psychology. The framework applies specifically to the reading difficulties manifested by dyslexic children.

Relevance: 30.00%

Abstract:

After the first cases of Covid-19 emerged in China in the autumn of 2019, at the beginning of 2020 the entire planet was plunged into a global pandemic that upended our lives, with consequences not experienced since the Spanish flu. The enormous quantity of scientific papers continually being published on the coronavirus and related viruses led to the creation of a single dynamic dataset called CORD-19, distributed free of charge. The need to retrieve useful information from this mass of data has further turned the spotlight on information retrieval (IR) systems, which can quickly and effectively recover valuable information in response to a user request known as a query. Of particular note was the TREC-COVID Challenge, a competition for the development of an IR system trained and tested on the CORD-19 dataset. The main problem is that this large collection of documents is entirely unlabelled, making it impossible to train neural network models on it directly. To work around the problem, we devised new self-supervised solutions, to which we applied the state of the art in deep metric learning and NLP. Deep metric learning, which is enjoying enormous success above all in computer vision, trains a model to pull similar images closer together and push different images apart. Since both images and text are represented as vectors of real numbers (embeddings), the same techniques can be used to pull relevant textual elements (e.g. a query and a paragraph) closer together and push non-relevant elements apart. We therefore trained a SciBERT model with various losses that today represent the state of the art in deep metric learning, in a completely self-supervised fashion, directly and exclusively on the CORD-19 dataset, and then evaluated it on the formal TREC-COVID set through an IR system, obtaining interesting results.
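As a rough sketch of the metric-learning idea described above, the snippet below embeds a query, a relevant paragraph and a non-relevant one with SciBERT and applies a triplet loss over cosine distance, pulling the relevant pair together and pushing the non-relevant one apart. The pairing strategy, pooling and hyperparameters are illustrative assumptions, not the thesis's actual training setup.

```python
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

name = "allenai/scibert_scivocab_uncased"
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)

def embed(texts):
    """Mean-pooled token embeddings, one vector per input text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (B, H)

# One self-supervised triplet: anchor query, relevant and non-relevant text
anchor = embed(["incubation period of covid-19"])
pos = embed(["Studies estimate a median incubation period of 5 days."])
neg = embed(["The sensor records voltages from the antenna front-end."])

# Triplet loss on cosine distance: enforce d(a, p) + margin < d(a, n)
loss = F.triplet_margin_with_distance_loss(
    anchor, pos, neg,
    distance_function=lambda a, b: 1 - F.cosine_similarity(a, b),
    margin=0.3,
)
loss.backward()   # gradients flow into SciBERT; an optimizer step follows
print(float(loss))
```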

Relevance: 30.00%

Abstract:

In the last decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have enabled a wide variety of successful applications due to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues such as poor generalization from limited data, fairness, robustness, and bias. In this thesis, we analyze the methodology of integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration. We highlight the possible shortcomings of these approaches and investigate the implications of integrating unstructured textual knowledge. We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models. We discuss UKI in the field of NLP, where knowledge is represented in a natural language format. We identify UKI as a complex process comprising multiple sub-processes, different knowledge types, and knowledge integration properties to guarantee. We examine the challenges of integrating unstructured textual knowledge and draw connections with well-known research areas in NLP. We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI. We investigate some challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI. We adopt simple yet effective neural architectures and discuss the challenges of such an approach. Finally, we identify KE as a form of symbolic representation. From this perspective, we remark on the need to define sophisticated UKI processes to verify the validity of knowledge integration. To this end, we foresee frameworks capable of combining symbolic and sub-symbolic representations for learning as a solution.

Relevance: 30.00%

Abstract:

Neural representations (NR) have emerged in the last few years as a powerful tool to represent signals from several domains, such as images, 3D shapes, or audio. Indeed, deep neural networks have been shown to be capable of approximating continuous functions that describe a given signal with theoretically infinite resolution. This finding makes it possible to obtain representations whose memory footprint is fixed and decoupled from the resolution at which the underlying signal can be sampled, something that is not possible with traditional discrete representations, e.g., grids of pixels for images or voxels for 3D shapes. During the last two years, many techniques have been proposed to improve the capability of NR to approximate high-frequency details and to make the optimization procedures required to obtain NR less demanding in terms of both time and data, motivating many researchers to deploy NR as the main form of data representation in complex pipelines. Following this line of research, we first show that NR can precisely approximate Unsigned Distance Functions, providing an effective way to represent garments, which feature open 3D surfaces and unknown topology. Then, we present a pipeline to obtain, in a few minutes, a compact Neural Twin® for a given object by exploiting recent advances in modeling neural radiance fields. Furthermore, we move a step in the direction of adopting NR as a standalone representation by considering the possibility of performing downstream tasks by processing the NR weights directly. We first show that deep neural networks can be compressed into compact latent codes. Then, we show how this technique can be exploited to perform deep learning on implicit neural representations (INR) of 3D shapes by looking only at the weights of the networks.
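A minimal sketch of the core idea, assuming a SIREN-style coordinate MLP (the thesis does not specify this architecture): a small network is fit to a 1D signal, after which the signal can be sampled at arbitrary resolution while the memory footprint stays fixed at the size of the weights.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(30.0 * x)   # frequency scaling as in SIREN

# Coordinate MLP: maps a coordinate t in [0, 1] to the signal value
net = nn.Sequential(
    nn.Linear(1, 64), Sine(),
    nn.Linear(64, 64), Sine(),
    nn.Linear(64, 1),
)

# Signal to represent: f(t) = sin(2*pi*t) + 0.5*sin(8*pi*t), sampled at 256 points
t = torch.linspace(0, 1, 256).unsqueeze(-1)
y = torch.sin(2 * torch.pi * t) + 0.5 * torch.sin(8 * torch.pi * t)

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for step in range(2000):
    opt.zero_grad()
    loss = ((net(t) - y) ** 2).mean()   # fit the network to the samples
    loss.backward()
    opt.step()

# The representation is the weights alone; resolution is now arbitrary:
# query the MLP at 10x the original sampling density.
t_fine = torch.linspace(0, 1, 2560).unsqueeze(-1)
y_fine = net(t_fine)
print(loss.item(), y_fine.shape)
```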

Relevance: 30.00%

Abstract:

The aim of this dissertation is to describe the methodologies required to design, operate, and validate the performance of ground stations dedicated to near and deep space tracking, as well as the models developed to process the acquired signals, from raw data to the output parameters of spacecraft orbit determination. This work is framed in the context of lunar and planetary exploration missions, addressing the challenges of receiving and processing radiometric data for radio science investigations and navigation purposes. These challenges include the design of an appropriate back-end to read, convert and store the antenna voltages; the definition of appropriate methodologies for pre-processing, calibration, and estimation of radiometric data for the extraction of information on the spacecraft state; and the definition and integration of accurate models of the spacecraft dynamics to evaluate the quality of the recorded signals. Additionally, the experimental design of acquisition strategies to perform direct comparisons between ground stations is described and discussed. In particular, evaluating the differential performance between stations requires the design of a dedicated tracking campaign that maximizes the overlap of the datasets recorded at the receivers, making it possible to correlate the received signals and isolate the contribution of the ground segment to the noise in each single link. Finally, in support of the methodologies and models presented, results from the validation and design work performed on the Deep Space Network (DSN) affiliated nodes DSS-69 and DSS-17 are also reported.
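As a small illustration of the kind of information on the spacecraft state that radiometric data carries, the sketch below inverts a two-way coherent Doppler measurement into a line-of-sight range rate; the uplink frequency and the standard 880/749 X-band turnaround ratio are example values, not parameters of the stations discussed.

```python
C = 299_792_458.0   # speed of light, m/s

def range_rate_from_doppler(f_up_hz, turnaround, f_received_hz):
    """Two-way coherent Doppler: f_rx ≈ M*f_up*(1 - 2*rdot/c) for rdot << c."""
    f_expected = turnaround * f_up_hz   # received frequency at zero range rate
    return C * (f_expected - f_received_hz) / (2.0 * f_expected)

f_up = 7.2e9          # example X-band uplink frequency, Hz
M = 880.0 / 749.0     # standard X-band transponder turnaround ratio
rdot = 1200.0         # simulated range rate, m/s (spacecraft receding)
f_rx = M * f_up * (1 - 2 * rdot / C)   # what the ground station measures

print("recovered range rate: %.1f m/s" % range_rate_from_doppler(f_up, M, f_rx))
```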

Relevance: 30.00%

Abstract:

Vision systems are powerful tools that play an increasingly important role in modern industry in detecting errors and maintaining product standards. With the wider availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect any anomalies, with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to reach the final performance. Transfer learning, which alleviates the need for a large amount of training data, combined with data augmentation methods consisting in the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, designed for vial counting and discrepancy detection, respectively. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
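A minimal sketch of the transfer-learning setup described above, assuming a ResNet-18 backbone (the actual backbone is not stated): an ImageNet-pretrained network receives a new two-class head, the backbone is frozen so only the head trains on the small annotated set, and simple augmentations generate synthetic variations of the training images.

```python
import torch.nn as nn
from torchvision import models, transforms

# Pretrained ResNet-18; replace the classifier head for 2 classes
# (e.g. "pack OK" vs "discrepancy" -- hypothetical labels)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze the backbone so only the new head trains on the small dataset
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

# Data augmentation: synthetic variations of the few annotated images
augment = transforms.Compose([
    transforms.RandomRotation(5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```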

Relevance: 30.00%

Abstract:

Phase-Locked Loops are circuits still used today to generate signals coherent in frequency and phase with their input signals, which is why they are one of the tools of radio science for reconstructing the signals exchanged with space probes and hidden by the noise accumulated along the path that separates the probes from the tracking stations on the ground. This thesis describes the implementation of a linearized digital PLL in Matlab and Simulink, in a new form with respect to the model implemented during the curricular internship, in order to improve its performance at low carrier-to-noise density ratios. Chapter 1 consists of two parts: the first introduces the context of the proposed work, namely orbit determination; the second presents the fundamentals of signal theory. Chapter 2 focuses on the analysis of Phase-Locked Loops, starting from a theoretical introduction and arriving at the implementation of a model in Simulink. Finally, Chapter 3 presents the results of applying the Simulink model to the analysis of signals from an actual mission.
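A minimal Python sketch of the kind of digital PLL the thesis models in Simulink, assuming a sinusoidal phase detector and a proportional-integral loop filter; all gains and signal parameters are illustrative, not those of the thesis model.

```python
import numpy as np

fs = 10_000.0                      # sample rate, Hz
n = 20_000
t = np.arange(n) / fs
f_carrier = 1_000.0                # true carrier frequency (to recover)
rng = np.random.default_rng(1)
x = np.cos(2 * np.pi * f_carrier * t + 0.7) + 0.5 * rng.standard_normal(n)

# Loop state: NCO phase/frequency estimate and PI loop-filter gains
phase = 0.0
freq = 2 * np.pi * 950.0 / fs      # initial guess, deliberately off by 50 Hz
kp, ki = 0.02, 2e-4                # proportional / integral gains (assumed)
freq_log = np.zeros(n)

for i in range(n):
    # Phase detector: mixing the input with the NCO quadrature output
    # yields a term proportional to the phase error (plus noise and a
    # double-frequency term the loop filters out)
    err = x[i] * -np.sin(phase)
    freq += ki * err               # integral path adjusts frequency
    phase += freq + kp * err       # proportional path adjusts phase
    freq_log[i] = freq * fs / (2 * np.pi)

# Once locked, the NCO frequency tracks the carrier
print("estimated carrier: %.1f Hz" % freq_log[-5000:].mean())
```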

Relevance: 30.00%

Abstract:

Thanks to the evolution of computing tools and digital infrastructure, artificial intelligence has advanced considerably in recent years, enabling ever new and more complex applications. The aim of this thesis project is to build a preliminary study model of an artificial intelligence known as a Convolutional Neural Network (CNN), to be employed in the fields of radio science and planetary exploration. In particular, one of the main intended applications of the model is in geodesy studies carried out through orbit determination of artificial satellites in their motion around celestial bodies. The accelerations caused by planetary gravitational fields perturb the orbits of artificial satellites; these variations are captured by radio receivers on the ground as a Doppler shift of the signal frequency, from which detailed information on the gravity field and internal structure of the celestial body under study can then be derived. To do this, one must determine the exact frequency of the incoming signal, which, owing to losses and disturbances along its path, always contains a noise component. The most common method for separating the information component from the noise and recovering the actual frequency is the Short-Time Fourier Transform (STFT). The proposed experimental activity therefore set out to train a CNN to estimate the frequency of real, noisy sinusoidal signals, in order to obtain a computationally fast and reliable model to support pre-processing operations for radio science missions.
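The STFT baseline mentioned above can be sketched in a few lines: estimate the frequency of a noisy sinusoid by locating the spectral peak in each short-time window. Signal and window parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 8_000.0
t = np.arange(int(4 * fs)) / fs
rng = np.random.default_rng(2)
f_true = 1_234.0                         # frequency to recover
x = np.sin(2 * np.pi * f_true * t) + 1.0 * rng.standard_normal(t.size)

# STFT with 1024-sample windows, 50% overlap (Hann window by default)
f, seg_t, Z = stft(x, fs=fs, nperseg=1024, noverlap=512)

# Per-window estimate: the frequency bin with the largest magnitude
f_hat = f[np.abs(Z).argmax(axis=0)]
print("mean estimate: %.1f Hz (true %.1f)" % (f_hat.mean(), f_true))
```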

Relevance: 20.00%

Abstract:

The aim of this study was to evaluate fat substitutes in the processing of sausages prepared with surimi made from piramutaba filleting waste. The formulation ingredients were mixed with the fat substitutes according to a 2^(4-1) fractional factorial design, in which the independent variables manioc starch (Ms), hydrogenated soy fat (F), texturized soybean protein (Tsp) and carrageenan (Cg) were evaluated against the responses of pH, texture (Tx), raw batter stability (RBS) and water-holding capacity (WHC) of the sausage. The fat substitutes were evaluated in 11 formulations, and the results showed that the greatest effects on the responses came from Ms, F and Cg, so Tsp was eliminated from the formulation. To find the best formulation for processing piramutaba sausage, a full 2^3 factorial design was then carried out to evaluate the concentrations of the fat substitutes over a wider range. The optimum fat-substitute concentrations found for the sausage formulation were carrageenan (0.51%), manioc starch (1.45%) and fat (1.2%).
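As an illustration of the screening design mentioned above, the sketch below constructs a 2^(4-1) fractional factorial: three factors form a full 2^3 design and the fourth is set to their product (defining relation D = ABC, a common choice assumed here, not taken from the paper).

```python
from itertools import product

# Half-fraction design for the four fat-substitute factors
runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append({"Ms": a, "F": b, "Tsp": c, "Cg": a * b * c})

for i, run in enumerate(runs, 1):
    print(i, {k: ("+" if v > 0 else "-") for k, v in run.items()})
# 8 runs instead of 16: each main effect remains estimable, at the cost
# of confounding main effects with three-factor interactions.
```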