10 results for VHDL (Computer hardware description language)

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Most modern devices and machinery, for both civil and industrial use, rely on electronic systems that supervise and control their operation. Inside these apparatuses there is almost certainly a digital control system which, thanks in part to the capabilities reached today, performs tasks that until not too many years ago were the domain of analog electronics; consider, for example, the DSPs (Digital Signal Processors) now employed in telecommunication systems. Despite the high computing power achieved by today's microprocessors, microcontrollers, and DSPs dedicated to embedded applications, when complex, time-critical processing must be carried out while rationalizing and optimizing the available resources, such as space, power consumption, and cost, the choice inevitably falls on FPGA devices. FPGAs, an acronym for Field Programmable Gate Array, are Very Large Scale Integration (VLSI) integrated circuits that can be configured via software after manufacturing. They differ from microprocessors in that they do not execute software written, for example, in assembly or C. Instead, they provide generic, configurable hardware resources (called Configurable Logic Blocks or Logic Array Blocks, depending on the device vendor) which, by means of a dedicated Hardware Description Language (HDL), are interconnected so as to form digital logic circuits. In this way, these devices can take on arbitrary logic functions, not foreseen by the designer of the integrated circuit but realizable thanks to the programmable structures it contains.
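The abstract above describes how an FPGA's configurable logic blocks can be made to realize arbitrary logic functions. As a rough illustration (not part of the thesis), the basic primitive inside such a block, a k-input lookup table (LUT), can be sketched in Python: "programming" the device amounts to loading a truth table into each LUT.

```python
# Illustrative sketch (not from the thesis): a k-input lookup table (LUT),
# the basic primitive inside an FPGA Configurable Logic Block.
# Loading the truth table plays the role of the configuration bitstream.

class LUT:
    def __init__(self, num_inputs, truth_table):
        # truth_table[i] is the output for the input combination whose
        # bits, read as a binary number, equal i.
        assert len(truth_table) == 2 ** num_inputs
        self.num_inputs = num_inputs
        self.truth_table = truth_table

    def __call__(self, *bits):
        assert len(bits) == self.num_inputs
        index = 0
        for b in bits:
            index = (index << 1) | (b & 1)
        return self.truth_table[index]

# The same "hardware" configured as two different gates, just by
# changing the truth table contents.
and2 = LUT(2, [0, 0, 0, 1])   # 2-input AND
xor2 = LUT(2, [0, 1, 1, 0])   # 2-input XOR

print(and2(1, 1))  # 1
print(xor2(1, 1))  # 0
```

Real CLBs combine several such LUTs with flip-flops and programmable routing, but the principle is the same: the logic function is data, not fixed wiring.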

Relevance: 30.00%

Abstract:

The LHC particle accelerator at CERN in Geneva enables highly relevant studies in subnuclear physics. The detector plays a crucial role in this field, which is why cutting-edge technologies are used in its construction. It is equally essential to have a data acquisition system that is as modern and, above all, as efficient as possible. Such a system is needed to manage all the electrical signals produced by the conversion of the physical event, a step required to make the quantities of interest measurable and quantifiable. In particular, this thesis follows the testing of the ROD boards of the ATLAS IBL experiment, which aims to verify their correct functionality before they are shipped to the CERN laboratories. These new boards handle the signals coming from the ATLAS Pixel Detector and forward them to computers for further processing. A similar system was already implemented and working, but chip degradation caused a loss of performance that made it necessary to insert an additional layer. The new layer of pixel detectors, called the Insertable Barrel Layer (IBL), thus brings a technological and performance upgrade to the ATLAS Pixel Detector, restoring the effectiveness of the system.

Relevance: 30.00%

Abstract:

The aim of this dissertation is to show, by means of a concrete case study, the power of contrastive analysis in predicting the errors a language learner will make. First, there is a description of what language transfer is and why it is important in second language acquisition. Second, a brief explanation of the history and development of contrastive analysis will be offered. Third, the focus of the thesis will move to an analysis of the errors typically made by language learners. To conclude, the dissertation will focus on the concrete case study of a Russian learner of English: after an analysis of the errors the student is likely to make, a recorded conversation will be examined.

Relevance: 30.00%

Abstract:

A Brain-Computer Interface (BCI) is a direct communication system between the brain and an external device that does not depend on the brain's normal output pathways of peripheral nerves or muscles. The signal generated by the user is acquired by dedicated sensors, then processed and classified so as to extract the information of interest, which is used to produce an output sent back to the user as feedback. BCI technology finds interesting applications in the biomedical field, where it can greatly help people affected by paralysis, though other uses should not be ruled out. This thesis focuses in particular on the hardware components of a brain-computer interface, analyzing the strengths and weaknesses of the various options: specifically, the choice of the equipment for detecting brain activity and of the mechanisms through which BCI users can interact with their surroundings (the so-called actuators). The choices are made taking users' needs into account, so as to reduce costs and risks while increasing the number of users who can actually benefit from a brain-computer interface.
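The abstract describes the standard BCI chain: acquisition, processing, classification, and feedback. A minimal sketch of that chain is shown below; the moving-average smoothing and threshold classifier are hypothetical stand-ins for illustration, not the methods analyzed in the thesis.

```python
# Toy sketch of the BCI processing chain described above:
# acquisition -> processing -> classification -> feedback.
# The smoothing and the threshold classifier are hypothetical
# simplifications, not the thesis's actual algorithms.

def process(samples, window=4):
    # Moving-average smoothing as a stand-in for real signal
    # conditioning (filtering, artifact removal, ...).
    smoothed = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

def classify(features, threshold=0.5):
    # Toy classifier: average amplitude above threshold -> "command".
    mean = sum(features) / len(features)
    return "command" if mean > threshold else "rest"

def feedback(label):
    # Output returned to the user, e.g. driving an actuator.
    return f"actuator: {label}"

raw = [0.1, 0.9, 0.8, 1.0, 0.7, 0.9]      # acquired samples (toy data)
print(feedback(classify(process(raw))))    # actuator: command
```

A real system would replace each stage with the hardware and algorithms the thesis compares (EEG amplifiers, feature extraction, trained classifiers), but the data flow is the same.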

Relevance: 30.00%

Abstract:

This thesis follows the testing of the ROD boards for layer 2 of the Pixel Detector of the ATLAS experiment, which aims to verify their correct functionality before they are shipped to the CERN laboratories. These new boards handle the signals coming from the ATLAS Pixel Detector and forward them to computers for further processing. The ROD boards will replace the previous SiROD boards in the experiment's data acquisition chain, starting from the new IBL layer and continuing with the three layers of the Pixel Detector, supporting the technological and performance upgrade required in view of the experiment's luminosity increase.

Relevance: 30.00%

Abstract:

The aim of this work is to develop a prototype of an e-learning environment that can foster Content and Language Integrated Learning (CLIL) for students enrolled in an aircraft maintenance training program, which allows them to obtain a license valid in all EU member states. Background research is conducted to retrace the evolution of the field of educational technology, analyzing different learning theories (behaviorism, cognitivism, and (socio-)constructivism) and reflecting on how technology and its use in educational contexts have changed over time. Particular attention is given to technologies that have been used and proved effective in Computer Assisted Language Learning (CALL). Based on the background research and on the students' learning objectives, i.e. learning highly specialized content and aeronautical technical English, a bilingual approach is chosen, three main tools are identified (a hypertextbook, an exercise creation activity, and a discussion forum) and the learning management system Moodle is chosen as the delivery medium. The hypertextbook is based on the English-language technical textbook that students already use. To foster text comprehension, the hypertextbook is enriched with hyperlinks and tooltips. Hyperlinks redirect students to webpages containing additional information in both English and Italian, while tooltips show the Italian equivalents of English technical terms. The exercise creation activity and the discussion forum foster interaction and collaboration among students, in line with socio-constructivist principles. In the exercise creation activity, students collaboratively create a workbook, which allows them to analyze in depth and master the contents of the hypertextbook and, at the same time, to create a learning tool that can help them, as well as future students, to enhance learning.
In the discussion forum students can discuss their individual issues, whether related to content, to English, or to the e-learning environment, helping one another and offering instructors suggestions on how to improve both the hypertextbook and the workbook based on their needs.
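The tooltip mechanism described above, showing Italian equivalents of English technical terms, could be generated along these lines. This is a hypothetical sketch, not the thesis's implementation; the glossary entries and function names are invented for illustration.

```python
# Hypothetical sketch (not the thesis's code): wrapping known technical
# terms in HTML <span> tooltips that show their Italian equivalents,
# as a hypertextbook like the one described might do.

import re

glossary = {            # assumed sample entries
    "fuselage": "fusoliera",
    "landing gear": "carrello di atterraggio",
}

def add_tooltips(text, glossary):
    # Longest terms first, so multi-word terms match before their parts.
    for term in sorted(glossary, key=len, reverse=True):
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        text = pattern.sub(
            lambda m: f'<span title="{glossary[term]}">{m.group(0)}</span>',
            text,
        )
    return text

html = add_tooltips("Inspect the fuselage before flight.", glossary)
print(html)
# the term is wrapped: <span title="fusoliera">fuselage</span>
```

In Moodle, the same effect is typically achieved with a glossary and auto-linking filter; the sketch only shows the underlying markup.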

Relevance: 30.00%

Abstract:

The main aim of this study is to provide a description of the phenomenon known as Child Language Brokering (CLB), a common practice among language minority communities that has nevertheless received little attention in the academic literature. As the children of immigrants often learn the host language much more quickly than their parents, they contribute to family life by acting as language and cultural mediators between family members and speakers of other languages. Many immigrant families prefer a language broker from within their own family to an external mediator or interpreter, even though there is a well-founded resistance among professionals to the use of these young interpreters. In this study I report findings from surveys of teachers in schools in Ravenna where students have been used as language brokers, and of students who have acted or are still acting as mediators for their families in different contexts, not only at school. This dissertation is divided into five chapters. Chapter one provides an overview of recent migration to Italy and of the differences between first-generation and second-generation immigrants. The chapter also discusses the professional interpreting services provided by the municipality of Ravenna. Chapter two presents an overview of the literature on child language brokering. Chapter three describes the methodology used to analyze the data collected. Chapter four contains a detailed analysis of the questionnaires administered to the students and the interviews conducted with the teachers in four schools in Ravenna. Chapter five focuses on the studies carried out by researchers at the Thomas Coram Research Unit and University College London, and draws a general comparison between their findings from online surveys of teachers in schools and my own findings on teachers' points of view.
The results of this study show that CLB is a common practice among immigrant children living in Ravenna and that, although almost all students reported appreciating it, further work is still needed to assess the impact of this phenomenon.

Relevance: 30.00%

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with massive amounts of data. Another important question is whether they are really biologically inspired, as is claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily transformed by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning, and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed by large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm: a new, mainly unsupervised method that is more biologically inspired. It tries to draw insights from the computational neuroscience community in order to incorporate into the learning process concepts such as time, context, and attention, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with smaller quantities of data, HTM can outperform the CNN.
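The automatic feature extraction the abstract attributes to CNNs rests on the convolution operation. A minimal, framework-free sketch of a 2-D convolution is shown below (for illustration only; it is not one of the networks compared in the thesis).

```python
# Minimal illustration of the 2-D convolution at the heart of a CNN
# (pure Python, no framework; not the networks compared in the thesis).

def conv2d(image, kernel):
    # Valid (no-padding) convolution of a 2-D image with a 2-D kernel.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = acc
    return out

# A vertical-edge kernel responds where pixel values change left-to-right:
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
```

In a CNN, many such kernels are learned from data rather than hand-designed, and their responses are stacked, nonlinearly transformed, and pooled layer by layer.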

Relevance: 30.00%

Abstract:

The digital revolution has affected all aspects of human life, and interpreting is no exception. This study will provide an overview of the technology tools available to the interpreter, but it will focus in particular on simultaneous interpretation, and especially on the “simultaneous interpretation with text” method. The decision to analyse this method arose after a two-day experience at the Court of Justice of the European Union (CJEU), during research for my previous Master’s dissertation. During those days, I noticed that interpreters were using “simultaneous interpretation with text” on a daily basis. Owing to the effort and the processes this method entails, this dissertation will aim to discover whether technology can help interpreters, and if so, how. The first part of the study will describe the “simultaneous with text” approach and how it is used at the CJEU; the data provided by a survey of professional interpreters will describe its use in other interpreting situations. The study will then describe Computer-Assisted Language Learning (CALL) technologies and technologies for interpreters. The second part of the study will focus on the interpreting booth, which represents the first application of technology in the interpreting field, as well as on the technologies that can be used inside the booth: programs, tablets and apps. The dissertation will then analyse the programs that might best help the interpreter in “simultaneous with text” mode, before providing some proposals for further software upgrades. In order to give a practical description of the possible upgrades, the domain of “judicial cooperation in criminal matters” will be taken as an example. Finally, after a brief overview of other applications of technology in the interpreting field (i.e. videoconferencing and remote interpreting), the conclusions will summarize the results provided by the study and offer some final reflections on the teaching of interpreting.

Relevance: 30.00%

Abstract:

Artificial Intelligence is reshaping the fashion industry in several ways. E-commerce retailers exploit their data through AI to enhance their search engines, make outfit suggestions, and forecast the success of a specific fashion product. This is a challenging endeavour, however, as the data they possess is huge, complex, and multi-modal. The most common way to search for fashion products online is by matching keywords against phrases in the product description, which is often cluttered, inadequate, and inconsistent across collections and sellers. A customer may also browse an online store's taxonomy, although this is time-consuming and does not guarantee finding relevant items. With the advent of Deep Learning architectures, particularly Vision-Language models, ad-hoc solutions have been proposed that model both the product image and its description to solve these problems. However, the proposed solutions do not effectively exploit the semantic or syntactic information of these modalities, nor the unique qualities and relations of clothing items. In this thesis, a novel approach is proposed to address these issues: it models and processes images and text descriptions as graphs in order to exploit the relations within and between the two modalities, and employs specific techniques to extract syntactic and semantic information. The results obtained show promising performance on different tasks when compared with current state-of-the-art deep learning architectures.
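To make the graph idea concrete, a toy sketch is given below: a product description reduced to a small graph of item-attribute relations, queried by edge rather than by raw keyword matching. All names and relations here are invented for illustration and are not the thesis's model, which learns such structures with deep architectures.

```python
# Toy sketch (hypothetical, not the thesis's model): a fashion product
# description represented as a small graph of item-attribute relations,
# matched by edge instead of by raw keyword search.

def build_graph(item, attributes):
    # Edges: (item, relation, value), e.g. ("dress", "has_color", "red").
    return {(item, rel, value) for rel, value in attributes}

def matches(graph, rel, value):
    # A query matches if some edge carries the requested relation/value.
    return any(e[1] == rel and e[2] == value for e in graph)

dress = build_graph("dress", [
    ("has_color", "red"),
    ("has_material", "silk"),
    ("has_style", "evening"),
])

print(matches(dress, "has_color", "red"))   # True
print(matches(dress, "has_color", "blue"))  # False
```

The point of the graph view is that "red silk evening dress" becomes structured relations that survive cluttered or inconsistent descriptions, instead of a bag of keywords.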