632 results for Decoding
Abstract:
The literature indicates that basic literacy (alfabetização), combined with literacy as a social practice (letramento), shapes the development of individuals, preparing them to act critically in society. This research aimed to analyze nuances of the basic literacy and literacy process of a student who rarely expresses himself verbally in the classroom, taking discursive genres as input. It is a qualitative, interpretive case study. Data were collected through the following instruments: systematic observation, interviews with three of the student's teachers, a field diary, and the student's written production. On the one hand, the results point to a teaching of initial reading and writing that is restricted to the concept of basic literacy as a writing technique, mere encoding and decoding, without advancing to basic literacy as the apprehension and understanding of meanings. On the other hand, they indicate that initial writing is organized to achieve only autonomous literacy, while ideological literacy remains distant from the school universe in the initial years of Basic Education.
Abstract:
Background: Thyroid hormones (THs) are known to regulate protein synthesis by acting at the transcriptional level and inducing the expression of many genes. However, little is known about their role in protein expression at the post-transcriptional level, even though studies have shown enhancement of protein synthesis associated with mTOR/p70S6K activation after triiodo-L-thyronine (T3) administration. Moreover, the effects of TH on translation initiation and polypeptide chain elongation factors, which are essential for activating protein synthesis, have been poorly explored. Therefore, considering that preliminary studies from our laboratory demonstrated an increase in insulin content in INS-1E cells in response to T3 treatment, the aim of the present study was to investigate whether proteins of the translational machinery might be involved in this effect. Methods: INS-1E cells were maintained in the presence or absence of T3 (10^-6 or 10^-8 M) for 12 hours. Thereafter, the insulin concentration in the culture medium was determined by radioimmunoassay, and the cells were processed for Western blot detection of insulin, eukaryotic initiation factor 2 (eIF2), p-eIF2, eIF5A, eEF1A, eIF4E binding protein (4E-BP), p-4E-BP, p70S6K, and p-p70S6K. Results: It was found that, in parallel with increased insulin production, T3 induced p70S6K phosphorylation and the expression of the translational factors eIF2, eIF5A, and eukaryotic elongation factor 1 alpha (eEF1A). In contrast, total and phosphorylated 4E-BP, as well as total p70S6K and p-eIF2 content, remained unchanged after T3 treatment. Conclusions: Considering that (i) p70S6K induces phosphorylation of S6 in the 40S ribosomal subunit, an essential condition for protein synthesis; (ii) eIF2 is essential for the initiation of messenger RNA translation; and (iii) eIF5A and eEF1A play a central role in the elongation of the polypeptide chain during decoding of the transcripts, the data presented here lead us to suppose that part of the T3-induced insulin expression in INS-1E cells depends on activation of protein synthesis at the post-transcriptional level, as these proteins of the translational machinery were shown to be regulated by T3.
Abstract:
In recent years, due to the rapid convergence of multimedia services, the Internet, and wireless communications, there has been a growing trend towards heterogeneity (in terms of channel bandwidths, mobility levels of terminals, and end-user quality-of-service (QoS) requirements) in emerging integrated wired/wireless networks. Moreover, in today's systems a multitude of users coexists within the same network, each with their own QoS requirements and bandwidth availability. In this framework, embedded source coding, which allows partial decoding at various resolutions, is an appealing technique for multimedia transmission. This dissertation covers my PhD research, mainly devoted to the study of embedded multimedia bitstreams in heterogeneous networks, developed at the University of Bologna, advised by Prof. O. Andrisano and Prof. A. Conti, and at the University of California, San Diego (UCSD), where I spent eighteen months as a visiting scholar, advised by Prof. L. B. Milstein and Prof. P. C. Cosman. In order to improve multimedia transmission quality over wireless channels, joint source and channel coding optimization is investigated in a 2D time-frequency resource block for an OFDM system. We show that knowing the order of diversity in the time and/or frequency domain can assist image (video) coding in selecting optimal channel code rates (source and channel code rates). Then, adaptive modulation techniques, aimed at maximizing spectral efficiency, are investigated as another possible solution for improving multimedia transmission. For both slow and fast adaptive modulation, the effects of imperfect channel estimation are evaluated, showing that the fast technique, optimal in ideal systems, can be outperformed by slow adaptive modulation when a realistic test case is considered. Finally, the effects of co-channel interference and of approximating the bit error probability (BEP) are evaluated for adaptive modulation techniques, providing new decision-region concepts and showing how the widely used BEP approximations can lead to a substantial loss in overall performance.
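As an illustration of the adaptive modulation principle discussed above, the following sketch selects a constellation size from an approximate closed-form BEP expression. The 0.2*exp(-1.5*SNR/(M-1)) formula is one of the widely used M-QAM approximations the abstract alludes to; the target BEP, constellation set, and function names are illustrative assumptions, not taken from the dissertation.

```python
from math import exp

def qam_bep(snr_linear, M):
    """Approximate bit error probability of M-QAM on an AWGN-like link:
    Pb ~= 0.2 * exp(-1.5 * snr / (M - 1)). An approximation of the kind
    whose accuracy the dissertation shows to matter."""
    return 0.2 * exp(-1.5 * snr_linear / (M - 1))

def pick_constellation(snr_linear, target_bep=1e-3, sizes=(4, 16, 64, 256)):
    """Core of adaptive modulation: transmit with the largest
    constellation whose predicted BEP still meets the target
    (returns None if even QPSK misses it)."""
    best = None
    for M in sizes:
        if qam_bep(snr_linear, M) <= target_bep:
            best = M
    return best
```

For example, pick_constellation(100.0) returns 16, i.e. at 20 dB SNR this rule selects 16-QAM: 64-QAM would already violate the 10^-3 target under the approximation above.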
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at upper layers, where the encoding units (symbols) are packets of bits rather than single bits; such upper layer coding techniques are therefore usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity inherent in the need to keep the physical layer interleaver at a reasonable size, so as to avoid increasing modem complexity and the latency of all services. Packet layer techniques, thanks to the longer codeword duration (each codeword is composed of several packets of bits), provide intrinsically longer protection against long fading events. Furthermore, being implemented at upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with service requirements (e.g. latency constraints). Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer (UL), the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at UL. In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful tool for system design, allowing the performance of the upper layer decoder to be predicted. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder), and we analyze the performance of state-of-the-art LDPCCC when our decoder is adopted.
Finally, we propose a design rule which allows performance and latency to be traded off.
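While the analytical framework itself cannot be reproduced here, the counting argument behind ideal-code performance is short: an (n, k) MDS packet code decodes if and only if no more than n - k packets of a codeword are erased. The sketch below evaluates the resulting decoding failure probability under i.i.d. packet erasures, a simplification of the correlated Land Mobile Satellite channels treated in the thesis.

```python
from math import comb

def mds_failure_prob(n, k, eps):
    """Decoding failure probability of an ideal (n, k) MDS packet code
    when packet erasures are i.i.d. with probability eps: failure
    occurs when more than n - k packets of the codeword are lost."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(n - k + 1, n + 1))
```

For instance, mds_failure_prob(100, 80, 0.1) gives the residual failure rate of a rate-0.8 ideal code on a 10% erasure channel; sweeping k against a latency budget is exactly the kind of trade-off the design rule above addresses.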
Abstract:
Modern embedded systems are equipped with hardware resources that allow the execution of very complex applications such as audio and video decoding. The design of such systems must satisfy two opposing requirements. On the one hand it must provide high computational power; on the other it must respect stringent energy consumption constraints. One of the most widespread trends for meeting these opposing needs is to integrate on a single chip a large number of processors characterized by a simplified design and low power consumption. However, to actually exploit the computational potential offered by an array of processors, application development methodologies must be heavily revisited. With the advent of multi-processor systems-on-chip (MPSoCs), parallel programming has spread widely in the embedded domain as well. However, progress in parallel programming has not kept pace with the ability to integrate parallel hardware on a single chip. Besides the introduction of multiple processors, the need to reduce MPSoC power consumption leads to other architectural solutions that directly complicate application development. The design of the memory subsystem, in particular, is a critical problem. Integrating memory banks on chip allows very short access times and very low power consumption. Unfortunately, the amount of on-chip memory that can be integrated in an MPSoC is very limited. For this reason it is necessary to add off-chip memory banks, which have a much larger capacity, but also higher power consumption and access times. Most MPSoCs currently on the market devote part of the area budget to implementing cache and/or scratchpad memories. Scratchpads (SPMs) are often preferred to caches in embedded MPSoC systems for reasons of greater predictability, smaller area occupation and, above all, lower power consumption. By contrast, while the use of caches is completely transparent to the programmer, SPMs must be explicitly managed by the application. Exposing the organization of the memory hierarchy to the application makes it possible to exploit its advantages (reduced access times and power consumption) efficiently. In return, obtaining these benefits requires writing applications so that data are partitioned and allocated to the various memories appropriately. The burden of this complex task obviously falls on the programmer. This scenario clearly illustrates the need for programming models and support tools that simplify the development of parallel applications. This thesis presents a framework for software development for embedded MPSoCs based on OpenMP. OpenMP is a de facto standard for programming shared-memory multiprocessors, characterized by a simple annotation-based approach to parallelization (compiler directives). Its programming interface allows loop-level parallelism, widespread in embedded signal processing and multimedia applications, to be expressed naturally and very efficiently. OpenMP is an excellent starting point for defining a programming model for MPSoCs, above all because of its ease of use.
On the other hand, to efficiently exploit the computational potential of an MPSoC, the implementation of OpenMP support must be deeply revisited both in the compiler and in the runtime support environment. All the constructs for managing parallelism, work sharing, and inter-processor synchronization carry an overhead cost that must be minimized so as not to compromise the benefits of parallelization. This can be achieved only through a careful analysis of the hardware characteristics and the identification of potential bottlenecks in the architecture. An implementation of task management, barrier synchronization, and data sharing that efficiently exploits the hardware resources achieves high performance and scalability. Data sharing, in the OpenMP model, deserves particular attention. In a shared-memory model, the data structures (arrays, matrices) accessed by the program are physically allocated on a single memory resource reachable by all processors. As the number of processors in a system grows, concurrent access to a single memory resource becomes an evident bottleneck. To relieve the pressure on memories and on the interconnect, we study and propose data structure partitioning techniques. These techniques require that a single array entity be treated in the program as a collection of many sub-arrays, each of which can be physically allocated on a different memory resource. From the program's point of view, addressing a partitioned array requires that, at every access, instructions be executed to recompute the physical destination address. This is clearly a tedious, complex, and error-prone task. For this reason, our partitioning techniques have been integrated into the OpenMP programming interface, which has been significantly extended. Specifically, new directives and clauses allow the programmer to annotate the array data to be partitioned and allocated in a distributed fashion across the memory hierarchy. Support tools have also been developed to collect profiling information on the array access patterns. This information is exploited by our compiler to allocate the partitions on the various memory resources while respecting an affinity relation between tasks and data. More precisely, the allocation passes in our compiler assign a given partition to the scratchpad memory local to the processor hosting the task that performs the largest number of accesses to it.
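To make the address recomputation issue concrete, here is a minimal sketch (in Python for readability, with illustrative names and a round-robin placement policy, not the thesis's actual runtime) of the tile lookup that every access to a partitioned array implies, and that the extended OpenMP compiler described above generates automatically:

```python
TILE = 256  # elements per tile (an assumed, illustrative tile size)

def locate(global_index, banks):
    """Map a logical array index to (memory bank, local offset).
    The logical array is split into fixed-size tiles, each allocated
    on a different memory resource (e.g. a per-core scratchpad)."""
    tile_id = global_index // TILE
    offset = global_index % TILE
    bank = banks[tile_id % len(banks)]  # round-robin tile placement
    return bank, offset

# Every read a[i] becomes: bank, off = locate(i, banks); bank[off]
```

Doing this by hand at every access is exactly the burden the extended directives remove; the profiling-driven allocation then replaces the round-robin policy with task-affine placement.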
Abstract:
This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are: network synchronization, position estimation, and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit in network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor network localization algorithms estimate the locations of sensors with initially unknown positions by using knowledge of the absolute positions of a few sensors and inter-sensor measurements such as distance and bearing measurements. Concerning this issue, i.e. the position estimation problem, two main contributions are given. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the problem of flip ambiguity that afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and this information is then used to build a new SDP formulation of the localization problem. Finally, a flip-ambiguity-robust network localization algorithm is derived and its performance is studied by Monte Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network is a network with a low degree of connectivity, in which any given pair of nodes may have to rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied. The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for tracking the position of a user in a WLAN-based indoor localization system.
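As a point of reference for the range-based localization problem, the following sketch shows a baseline linear least-squares position estimate from anchor positions and measured distances. It is a textbook estimator under the assumption of at least three non-collinear anchors, not the SDP formulation developed in the thesis (which additionally handles flip ambiguity), and the function names are illustrative.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2D position from n >= 3 anchor positions and ranges.
    Subtracting the first range equation from the others linearizes
    (x - xi)^2 + (y - yi)^2 = di^2 into A p = b, solved in the
    least-squares sense."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With incomplete ranging information this estimator is exactly where flip ambiguity arises: two mirror-image positions can fit the same distances, which is what the SDP framework above is designed to resolve.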
Abstract:
This thesis regards the Wireless Sensor Network (WSN) as one of the most important technologies of the twenty-first century and studies the implementation of different packet-level erasure correcting codes to cope with the "bursty" nature of the transmission channel and the possibility of packet losses during transmission. The limited battery capacity of each sensor node makes the minimization of power consumption one of the primary concerns in WSNs. Considering also that in each sensor node communication is considerably more expensive than computation, the core idea is to invest computation within the network whenever possible so as to save on communication costs. The goal of the research was to evaluate a parameter, for example the Packet Erasure Ratio (PER), that permits verification of the functionality and behavior of the created network, validation of the theoretical expectations, and evaluation of the convenience of introducing packet recovery techniques using different types of packet erasure codes in different types of networks. Thus, considering all the energy consumption constraints in WSNs, the topic of this thesis is to minimize consumption by introducing encoding/decoding algorithms into the transmission chain, in order to avoid retransmission of packets erased by the Packet Erasure Channel and save the energy spent on each retransmitted packet. In this way it is possible to extend the lifetime of the entire network.
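The simplest instance of this packet-level erasure coding idea is a single XOR parity packet, sketched below: any one lost packet can be rebuilt from the survivors without retransmission. The thesis evaluates more general codes, but the energy trade-off (a little extra computation and one extra packet versus a full retransmission) is already visible here.

```python
def xor_parity_encode(packets):
    """Append one XOR parity packet to k equal-length data packets:
    a (k+1, k) erasure code that recovers any single loss."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover_lost(surviving):
    """XOR of the n-1 surviving packets rebuilds the single missing
    one, since the XOR of all n packets (data + parity) is zero."""
    out = bytes(len(surviving[0]))
    for p in surviving:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out
```

Stronger codes (e.g. Reed-Solomon over packets) generalize this to multiple losses at the cost of more computation per node, which is precisely the communication-versus-computation balance discussed above.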
Abstract:
In the present dissertation, investigations into the expression and function of the respiratory proteins neuroglobin (Ngb) and cytoglobin (Cygb) in vertebrates were carried out. Both globins were discovered only recently, and despite available data on their structure and biochemical properties, their functions have not yet been conclusively clarified. In the first part of this work, the cellular and subcellular localization of neuroglobin and cytoglobin in murine tissue sections was examined. The expression of Ngb in neuronal and endocrine tissues is evidently related to the high metabolic activities of these organs. In the brain in particular, regional differences in Ngb expression were observed, with particularly strong neuroglobin expression correlating with brain regions known to have the highest basal activities. In view of this, the function of neuroglobin may lie in the basal O2 metabolism of these tissues, with Ngb, as an O2 supplier and short-term O2 store, securing the comparatively high local oxygen demand. Further functions in the detoxification of ROS or RNS, or the recently published possible role of Ngb in preventing mitochondria-mediated apoptosis by reducing released cytochrome c, are also conceivable. Cygb expression in the brain was restricted to relatively few neurons in various brain regions and there showed predominantly co-localization with neuronal NO synthase. This finding suggests a function of cytoglobin in NO metabolism. Quantitative RT-PCR experiments on the mRNA expression of Ngb and Cygb in aging mammals, using the hamster species Phodopus sungorus as an example, showed no significant changes in the mRNA levels of either globin in old compared with young animals. This contradicts published data in which a decrease in neuroglobin levels with age was shown in the mouse by Western blot analyses; these may be species-specific differences. The comparative sequence analysis of the human and murine NGB/Ngb gene region carried out in this work provides, on the one hand, indications of the possible regulation of Ngb expression and, on the other, an important basis for functional analyses of this gene. A minimal promoter region could be defined which, together with several conserved regulatory elements, will serve as a basis for experimental investigations of promoter activity as a function of external influences. Bioinformatic analyses led to the identification of a so-called "neuron restrictive silencer element" (NRSE) in the Ngb promoter, which is presumably responsible for the predominantly neuronal expression of the protein. The controversially discussed O2-dependent regulation of Ngb expression, however, could not be confirmed by the comparative sequence analyses performed. No binding sites conserved between human and mouse were identified for the transcription factor HIF-1, which mediates the expression of numerous hypoxia-regulated genes, e.g. Epo and VEGF. Together with the in vivo data, this argues against regulation of Ngb expression under reduced oxygen availability.
The complexity of the functions of Ngb and Cygb in vertebrate O2 metabolism makes the use of murine model systems indispensable, allowing the functions of both proteins to be elucidated step by step. The present work also makes an important contribution here. The gene-targeting vector constructs produced, in combination with the established detection methods for genotyping embryonic stem cells, provide the basis for the successful generation of Ngb knock-out as well as Ngb- and Cygb-overexpressing transgenic animals. These will be of enormous importance for the final decoding of functionally relevant questions.
Abstract:
This thesis aims to present a packet-level code, with performance very close to the optimum, for satellite communication designs. The other purpose of this thesis is to understand whether it still remains much harder to handle errors directly rather than erasures. Satellite communication applications currently all use packet erasure coding to encode and decode the information. The structure of erasure decoding is very simple, because we only need a Cyclic Redundancy Check (CRC) to realize it. The problem arises when we have packets of medium or small size (for example smaller than 100 bits), because in these situations the cost of the CRC turns out to be too high. The solution can be found by using Vector Symbol Decoding (VSD) to achieve the same performance as erasure codes, but without the need for a CRC. First, a brief introduction is given on how packet-level coding was born and how it evolved. Then the q-ary Symmetric Channel (qSC) is introduced, with the derivation of both its capacity and its Random Coding Bound (RCB). VSD is then proposed with the hope of outperforming Verification Based Decoding (VBD) over the qSC channel. Finally, the actual performance of VSD is estimated via numerical simulations. Possible performance improvements with respect to VBD are discussed, as well as possible future applications. We have also answered the question of whether it is still so much harder to handle errors rather than erasures.
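For reference, the qSC capacity mentioned above has a standard closed form: a symbol is received intact with probability 1 - p and is otherwise replaced by one of the other q - 1 symbols uniformly at random, giving C = log2(q) - H(p) - p*log2(q - 1) bits per symbol. A minimal sketch evaluating it (assuming 0 <= p < 1) follows.

```python
from math import log2

def qsc_capacity(q, p):
    """Capacity in bits/symbol of the q-ary symmetric channel with
    symbol error probability p, errors uniform over the other q - 1
    symbols: C = log2(q) - H(p) - p * log2(q - 1)."""
    if p == 0.0:
        return log2(q)
    h = -p * log2(p) - (1 - p) * log2(1 - p)  # binary entropy H(p)
    return log2(q) - h - p * log2(q - 1)
```

For large q (long packet symbols) the p*log2(q - 1) penalty of genuine errors versus mere erasures grows, which is one way to see why handling errors is harder than handling erasures.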
Abstract:
In the course of evolution, oxygen-metabolizing organisms must have developed a series of adaptations in order to survive in the cytotoxic oxidative environment of the oxygen-containing Earth's atmosphere. The comparative analyses of mitochondrially encoded and nuclear-encoded proteomes of several hundred species carried out in this work have shown that the evolution of an alternative genetic code in mitochondria was a modern adaptation in this sense. Many aerobic animals and fungi, deviating from the standard genetic code, decode the codon AUA as methionine. In the present work it is shown that these species thereby achieve a massive accumulation of the very easily oxidized amino acid methionine in their respiratory chain complexes, which are generally a preferred target of reactive oxygen species. This finding can only be explained consistently by assuming an antioxidant effect of this amino acid, as first postulated in 1996 by R. Levine on the basis of oxidation measurements in model proteins. In the present work, this hypothesis is now confirmed directly by means of novel model substances in living cells. The bioinformatic analyses and cell biology experiments performed demonstrate that collective protein changes can be the driving force for the evolution of deviating genetic codes. The importance of oxidative stress was also investigated within the frame of reference of acute oxidative damage in an individual organism. Since oxidative stress appears to be prominently involved in the pathogenesis of age-associated neurodegenerative diseases such as Alzheimer's disease, the effects of environmentally induced oxidative stress on the histopathological course were investigated in vivo in a transgenic model of Alzheimer's disease. For this purpose, transgenic mice of the APP23 model were subjected in feeding experiments to a lifelong deficiency of the antioxidants selenium or vitamin E. While selenoprotein expression was reduced in a tissue-specific manner by the selenium-deficient diet, there were no signs of an accelerated appearance of pathological markers such as amyloid plaques or neurodegeneration. Rather, an unexpected trend towards a lower plaque burden was seen in vitamin E-deficient Alzheimer mice. Even if these data may only be interpreted with caution because of the small size of the experimental animal groups, a lack of essential antioxidant nutrients does not appear to negatively influence progression in an established Alzheimer model.
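The codon reassignment at the heart of this argument can be illustrated with a toy translation table: the standard code reads AUA as isoleucine, whereas the mitochondrial codes in question read it as methionine, so the very same transcript yields more methionines. The table fragments and sequence below are simplified illustrations, not data from the dissertation.

```python
# Simplified table fragments: only the AUN codons that matter here.
STANDARD = {"AUG": "Met", "AUA": "Ile", "AUU": "Ile", "AUC": "Ile"}
MITO     = {"AUG": "Met", "AUA": "Met", "AUU": "Ile", "AUC": "Ile"}

def count_met(mrna, table):
    """Count methionines in an mRNA read in-frame under a given code."""
    codons = [mrna[i:i + 3] for i in range(0, len(mrna) - 2, 3)]
    return sum(1 for c in codons if table.get(c) == "Met")

mrna = "AUGAUAAUUAUA"  # toy transcript: AUG AUA AUU AUA
print(count_met(mrna, STANDARD), count_met(mrna, MITO))  # prints: 1 3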
Abstract:
The monitoring of cognitive functions aims at gaining information about the current cognitive state of the user by decoding brain signals. In recent years, this approach has made it possible to acquire valuable information about the cognitive aspects of human interaction with the external world. Starting from this consideration, researchers began to consider passive applications of the brain-computer interface (BCI) in order to provide a novel input modality for technical systems based solely on brain activity. The objective of this thesis is to demonstrate how passive Brain Computer Interface (BCI) applications can be used to assess the mental states of users, in order to improve human-machine interaction. Two main studies are presented. The first investigates whether morphological variations of Event Related Potentials (ERPs) can be used to predict users' mental states (e.g. attentional resources, mental workload) during different reactive BCI tasks (e.g. P300-based BCIs), and whether this information can predict the subjects' performance on the tasks. In the second study, a passive BCI system able to estimate online the mental workload of the user, relying on the combination of EEG and ECG biosignals, is proposed. The latter study was performed by simulating an operative scenario in which the occurrence of errors or lapses in performance could have significant consequences. The results showed that the proposed system is able to estimate the mental workload of the subjects online, discriminating three different difficulty levels of the tasks and ensuring high reliability.
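As an indication of how online workload estimation from biosignals can work, the sketch below computes a common EEG heuristic, the ratio of frontal theta to parietal alpha band power. This is a generic illustration (the channel choice, frequency bands, and sampling rate are assumptions), not the thesis's actual EEG+ECG classifier.

```python
import numpy as np
from scipy.signal import welch

def workload_index(eeg_frontal, eeg_parietal, fs=256):
    """Heuristic mental-workload index: frontal theta power (4-8 Hz)
    over parietal alpha power (8-12 Hz), estimated with Welch
    periodograms; higher values are commonly read as higher load."""
    def band_power(x, lo, hi):
        f, pxx = welch(x, fs=fs, nperseg=fs * 2)
        mask = (f >= lo) & (f < hi)
        return np.trapz(pxx[mask], f[mask])
    theta = band_power(eeg_frontal, 4, 8)
    alpha = band_power(eeg_parietal, 8, 12)
    return theta / alpha
```

An online system would apply such a feature extractor to sliding windows and feed the features, together with ECG-derived ones, into a trained classifier to discriminate the difficulty levels mentioned above.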
Abstract:
Random access (RA) protocols are normally used in satellite networks for initial terminal access and are particularly effective since no coordination is required. Moreover, contention resolution diversity slotted Aloha (CRDSA), irregular repetition slotted Aloha (IRSA), and coded slotted Aloha (CSA) have been shown to be more efficient than classic RA schemes such as slotted Aloha, and can also be exploited when short packet transmissions take place over a shared medium. In particular, they rely on burst repetition and on successive interference cancellation (SIC) applied at the receiver. The SIC process can be well described using a bipartite graph representation, exploiting tools used to analyze iterative decoding. The scope of my Master's thesis has been to describe the performance of such RA protocols when Rayleigh fading is taken into account. In this setting, each user has a chance to correctly decode a packet even in the presence of a collision, and when SIC is considered this may result in multi-packet reception. The SIC procedure under Rayleigh fading has been analyzed for the asymptotic case (infinite frame length), supporting the analysis of both throughput and packet loss rate. An upper bound on the achievable performance has been obtained analytically. It can be shown that, under particular channel conditions, the throughput of the system can be greater than one packet per slot, which is the theoretical limit in the collision channel case.
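The SIC process on the bipartite graph is easy to reproduce in a short Monte Carlo simulation. The sketch below implements CRDSA on the plain collision channel (no Rayleigh fading or capture, so throughput stays below one packet per slot); the function name and parameters are illustrative.

```python
import random

def crdsa_throughput(n_users, n_slots, n_frames=1000, replicas=2, max_iter=20):
    """Monte Carlo throughput of CRDSA on a collision channel: each
    user sends `replicas` copies per frame; singleton slots are decoded
    and the twin copies of decoded users are cancelled (SIC)."""
    decoded_total = 0
    for _ in range(n_frames):
        slots = [set() for _ in range(n_slots)]  # users per slot
        for u in range(n_users):
            for s in random.sample(range(n_slots), replicas):
                slots[s].add(u)
        decoded = set()
        for _ in range(max_iter):
            progress = False
            for s in range(n_slots):
                alive = slots[s] - decoded   # uncancelled bursts in slot
                if len(alive) == 1:          # singleton: decodable
                    decoded |= alive
                    progress = True
            if not progress:
                break
        decoded_total += len(decoded)
    return decoded_total / (n_frames * n_slots)  # packets per slot
```

Comparing, e.g., crdsa_throughput(60, 100) against the slotted Aloha benchmark at the same load shows the SIC gain; adding per-replica Rayleigh fading and a capture threshold in the singleton test is the extension studied in the thesis.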
Abstract:
This study builds on a former student's work aimed at examining the influence of handedness on conference interpreting. In simultaneous interpreting (SI), both cerebral hemispheres participate in the decoding of the incoming message and in the activation of the motor functions for the production of the output signal. In right-handers, language functions are mainly located in the left hemisphere, while left-handers have a more symmetrical representation of language functions. Given that, with the development of interpreting skills and long work experience, the interpreter's brain becomes less lateralized for language functions, in an initial phase left-handers may be «neurobiologically better suited for interpreting tasks» (Gran and Fabbro 1988: 37). To test this hypothesis, 9 students (5 right-handers and 4 left-handers) participated in a dual test of simultaneous and consecutive interpretation (CI) from English into Italian. The subjects were asked to interpret one text with their preferred ear and the other with the non-preferred one, since according to neuropsychology aural symmetry reflects cerebral symmetry. The aim of this study was to analyze: 1) the differences in the number of errors in consecutive and simultaneous interpretation with the preferred and non-preferred ear; 2) the differences in performance (in terms of number of errors) between right-handers and left-handers, both with the preferred and the non-preferred ear; 3) the most frequent types of errors in right- and left-handers; 4) the influence of the degree of handedness on interpreting quality. The students' performances were analyzed in terms of errors of meaning, errors with numbers, omissions of text, omissions of numbers, inaccuracies, errors of nexus, and unfinished sentences. The results showed that: 1) in SI, subjects committed fewer errors when interpreting with the preferred ear, whereas in CI a slight advantage of the non-preferred ear was observed; moreover, in CI, right-handers committed fewer mistakes with the non-preferred ear than with the preferred one; 2) the overall performance of left-handers proved to be better than that of right-handers; 3) in SI, left-handers committed fewer errors of meaning and fewer errors with numbers than right-handers, whereas in CI left-handers committed fewer errors of meaning but more errors with numbers than right-handers; 4) as the degree of left-handedness increases, the number of errors committed also increases. Moreover, there is a statistically significant left-ear advantage for right-handers and a right-ear advantage for left-handers. Finally, those who interpreted with their right ear committed fewer errors with numbers than those who used their left ear or both ears.
Abstract:
This dissertation focuses on the phenomenon of amateur subtitling, known as fansubbing. Although this phenomenon began in the late '80s, in recent years amateur subtitling has spread worldwide thanks to both the Internet and fan communities, also known as fandoms. At first, amateur subtitling centred mainly on the translation of Japanese cartoons, but nowadays fandoms also subtitle other kinds of audiovisual products, such as American TV series. Through fansubbing, which is created by fans for other fans, fandoms make clear that they would rather have subtitled than dubbed versions of audiovisual products, dubbing being the norm in Italy and Spain. The dissertation provides a linguistic analysis of the Spanish fansubbing of the Italian TV series Romanzo Criminale. Its purpose is to analyse fansubbing from the linguistic point of view as well as from the point of view of translation, and to evaluate to what extent this translation can be compared to professional subtitling. The first chapter offers an introduction to the TV series and provides an overview of the main events and characters. The second chapter deals with the strategies fansubbers use to translate cultural elements from Italian into Spanish. The third chapter focuses on linguistic mistakes due to calques and linguistic interference between Italian and Spanish. The fourth chapter provides an analysis of translation errors which occurred during the decoding of the original text. The aim is to understand whether this kind of mistake might jeopardize the comprehension of the original message.
Abstract:
A new fragile logo watermarking scheme is proposed for public authentication and integrity verification of images. The security of the proposed block-wise scheme relies on a public encryption algorithm and a hash function. The encoding and decoding methods can provide public detection capabilities even in the absence of the image indices and the original logos. Furthermore, the detector automatically authenticates input images and extracts possible multiple logos and image indices, which can be used not only to localise tampered regions, but also to identify the original source of images used to generate counterfeit images. Results are reported to illustrate the effectiveness of the proposed method.
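A minimal sketch of the generic block-wise, hash-based fragile watermarking idea is given below: each block's content hash is embedded in its least significant bits, and verification flags blocks whose mark no longer matches, localising tampering. This illustrates only the hash and localisation component; the proposed scheme's public-key encryption, logos, and image indices are omitted, and all names are illustrative.

```python
import hashlib
import numpy as np

BLOCK = 8  # block side in pixels; image dims assumed multiples of 8

def embed(img):
    """Embed a per-block SHA-256 mark of the 7 MSB planes into the LSB
    plane of a grayscale uint8 image (generic fragile watermark)."""
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            blk = out[y:y+BLOCK, x:x+BLOCK] & 0xFE        # clear LSBs
            digest = hashlib.sha256(blk.tobytes()).digest()
            bits = np.unpackbits(np.frombuffer(digest, np.uint8))
            flat = blk.flatten()
            flat |= bits[:flat.size]                       # write mark
            out[y:y+BLOCK, x:x+BLOCK] = flat.reshape(blk.shape)
    return out

def verify(img):
    """Return top-left corners of blocks whose mark no longer matches
    the recomputed content hash (i.e. localised tampered regions)."""
    tampered = []
    h, w = img.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            blk = img[y:y+BLOCK, x:x+BLOCK]
            digest = hashlib.sha256((blk & 0xFE).tobytes()).digest()
            bits = np.unpackbits(np.frombuffer(digest, np.uint8))
            if not np.array_equal(blk.flatten() & 1, bits[:blk.size]):
                tampered.append((y, x))
    return tampered
```

Any single-pixel change flips the block's hash with overwhelming probability, which is what makes such marks fragile; the published scheme additionally encrypts the mark so that verification is public while forgery is not.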