936 results for MRD codes
Abstract:
Let B[X; S] be a monoid ring, where B is any fixed finite commutative unitary ring and S is a monoid determined by positive integers a and b with b = a + 1, where a is any positive integer. In this paper we construct cyclic codes, BCH codes, alternant codes, Goppa codes, and Srivastava codes through the monoid ring B[X; S]. For a = 1, almost all the results contained in [16] stand as a very particular case of this study.
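The construction itself is not spelled out in the abstract; as a minimal, generic illustration of the cyclic-code machinery it generalizes (here over the ordinary polynomial ring GF(2)[X] rather than a monoid ring B[X; S]), consider the binary (7,4) cyclic code with generator g(x) = x^3 + x + 1, which divides x^7 - 1:

```python
# Minimal sketch of a binary cyclic code, NOT the paper's monoid-ring setting.
# Polynomials over GF(2) are bit lists, index = exponent (LSB first).

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def poly_mod_gf2(a, m):
    """Remainder of a(x) divided by m(x) over GF(2)."""
    a = a[:]
    while len(a) >= len(m):
        if a[-1]:
            shift = len(a) - len(m)
            for j, mj in enumerate(m):
                a[shift + j] ^= mj
        a.pop()
    return a

G = [1, 1, 0, 1]  # g(x) = 1 + x + x^3, divides x^7 - 1 over GF(2)
N = 7

def encode(msg_bits):
    """Nonsystematic encoding: c(x) = m(x) * g(x), padded to length 7."""
    c = poly_mul_gf2(msg_bits, G)
    return c + [0] * (N - len(c))

def is_codeword(c):
    """c(x) is a codeword iff g(x) divides c(x)."""
    return not any(poly_mod_gf2(c, G))

cw = encode([1, 1, 0, 1])
shifted = [cw[-1]] + cw[:-1]          # cyclic right shift
print(is_codeword(cw), is_codeword(shifted))  # True True: shifts stay in the code
```

The defining property of a cyclic code, closure under cyclic shifts, follows because multiplying by x modulo x^7 - 1 permutes coefficients cyclically.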
Abstract:
This thesis regards the Wireless Sensor Network (WSN) as one of the most important technologies of the twenty-first century and studies the implementation of different packet-correcting erasure codes to cope with the "bursty" nature of the transmission channel and the possibility of packet losses during transmission. The limited battery capacity of each sensor node makes the minimization of power consumption one of the primary concerns in WSNs. Since in each sensor node communication is considerably more expensive than computation, the core idea is to invest computation within the network whenever possible in order to save on communication costs. The goal of the research was to evaluate a parameter, for example the Packet Erasure Ratio (PER), that permits verifying the functionality and behavior of the created network, validating the theoretical expectations, and assessing the convenience of introducing packet-recovery techniques using different types of packet erasure codes in different types of networks. Thus, considering all the constraints on energy consumption in WSNs, the topic of this thesis is to minimize it by introducing encoding/decoding algorithms in the transmission chain, in order to prevent the retransmission of erased packets through the Packet Erasure Channel and save the energy used for each retransmitted packet. In this way it is possible to extend the lifetime of the entire network.
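The trade of computation for communication can be illustrated with the simplest packet erasure code (a minimal sketch, not one of the specific codes evaluated in the thesis): one XOR parity packet per group lets the receiver rebuild any single erased packet without requesting a retransmission.

```python
# Hedged sketch: a single XOR parity packet recovers any one erased packet
# in a group, trading one extra transmission for a saved retransmission.

def xor_packets(packets):
    """Byte-wise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """received: dict index -> packet, with exactly one packet missing.
    XOR of all surviving packets and the parity equals the missing one."""
    return xor_packets(list(received.values()) + [parity])

data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_packets(data)
# Suppose packet 2 is erased on the channel:
got = {i: p for i, p in enumerate(data) if i != 2}
print(recover(got, parity))  # b'pkt2'
```

The energy argument is that the XOR costs a few CPU cycles per byte at the nodes, while a retransmission costs a full radio transmission plus the round-trip signalling.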
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed and computed observables; disturbances in either degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, in some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise were proposed. Chapter 3 describes the development of the multiarc library, which makes it possible to perform a multi-arc orbit determination with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments at Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the full unconstrained quadrupole gravity field was estimated, yielding a solution statistically not compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable. The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
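The origin of numerical noise in computed Doppler observables can be seen with a back-of-the-envelope sketch (an illustration of the floating-point mechanism only, not the thesis's actual noise model or the ODP/MONTE code): when Doppler is formed by differencing two range-like quantities stored as IEEE-754 doubles, the finite spacing between representable doubles at Saturn distance maps into a velocity error over the count time.

```python
# Hedged illustration: round-off floor of a differenced-range Doppler
# observable at Saturn distance, assuming IEEE-754 double precision.

import math

range_m = 1.5e12        # ~10 AU, order of the Earth-Saturn distance, in metres
count_time = 60.0       # assumed Doppler count time, in seconds

ulp = math.ulp(range_m)           # spacing between adjacent doubles near 1.5e12
print(ulp)                        # 2**-12 m, i.e. ~0.24 mm
# Worst-case round-off on the range difference maps into velocity noise:
print(ulp / count_time)           # ~4e-6 m/s, comparable to um/s-level tracking
```

This is why the quantization floor can become visible in, and in some conditions dominate, the residuals of high-accuracy radio tracking.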
Abstract:
The space environment has always been one of the most challenging for communications, at both the physical and the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays, and relatively frequent losses. Because of these problems, the normal TCP/IP suite protocols are hardly applicable. Moreover, reliability is fundamental in space scenarios: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols such as TCP, reliability is obtained by means of an ARQ (Automatic Retransmission reQuest) method, which, however, performs poorly when the transmission channel has long delays. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, when single bits are flipped by channel noise, redundancy bits can be exploited to recover the original information. In the presence of binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information. FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol created to add reliability to DTN networks is the Licklider Transmission Protocol (LTP), designed to achieve better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
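To make the packet-level EC idea concrete, here is an illustrative Reed-Solomon-style sketch (an assumption: this is a generic maximum-distance-separable construction over the prime field GF(257), not LTP's actual mechanism or the codes studied in the thesis). Each byte position of the k data packets defines a degree-(k-1) polynomial; n > k evaluations give n coded packets, and ANY k received packets reconstruct the data by Lagrange interpolation, with no retransmission request.

```python
# Hedged sketch of an MDS packet erasure code over GF(257).
# Packets are lists of ints; a real implementation would use GF(256)
# so every symbol fits in a byte.

P = 257  # prime, so byte values 0..255 are field elements

def _lagrange(xs_ys, t):
    """Evaluate the interpolating polynomial through xs_ys at t, mod P."""
    val = 0
    for j, yj in xs_ys:
        num = den = 1
        for m, _ in xs_ys:
            if m != j:
                num = num * (t - m) % P
                den = den * (j - m) % P
        val = (val + yj * num * pow(den, -1, P)) % P
    return val

def encode(data_pkts, n):
    """k equal-length data packets -> n coded packets (evaluations at x=0..n-1).
    Evaluations at x=0..k-1 reproduce the data, so the code is systematic."""
    k, L = len(data_pkts), len(data_pkts[0])
    return [[_lagrange([(j, data_pkts[j][i]) for j in range(k)], x)
             for i in range(L)]
            for x in range(n)]

def decode(received, k, L):
    """received: dict x -> packet; any k entries recover the k data packets."""
    xs = sorted(received)[:k]
    return [[_lagrange([(j, received[j][i]) for j in xs], t)
             for i in range(L)]
            for t in range(k)]

data = [list(b"ab"), list(b"cd")]        # k = 2 data packets
coded = encode(data, 4)                  # n = 4 coded packets
rec = decode({1: coded[1], 3: coded[3]}, k=2, L=2)  # packets 0 and 2 erased
print(rec == data)  # True
```

On a long-delay link this matters because the sender can push n packets proactively and the receiver succeeds as soon as any k arrive, instead of paying a round trip per loss as ARQ does.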
Abstract:
The origin and evolution of the genetic code, which translates the nucleotide sequence of mRNA into the amino acid sequence of proteins, are among the greatest riddles of biology. The first organisms, which appeared on Earth about 3.8 billion years ago, used a primordial genetic code that presumably comprised exclusively abiotically available amino acids of terrestrial or extraterrestrial origin. New amino acids were successively biosynthesized and selectively incorporated into the code, which in its modern form comprises up to 22 amino acids. The reasons for their selection and the chronology of their incorporation remain unknown to this day and were to be investigated in the present work. Based on quantum-chemical calculations, this work first demonstrated a relationship between the HOMO-LUMO energy gap (H-L distance), an inverse quantum-chemical correlate of general chemical reactivity, and the chronological incorporation of the amino acids into the genetic code. Accordingly, early amino acids are characterized by large H-L distances and late amino acids by small H-L distances. An analysis of the metabolism of tyrosine and tryptophan, the two youngest standard amino acids, revealed their importance as precursors of structures that exhibit high redox activity and whose synthesis simultaneously requires molecular oxygen. For this reason, the redox activities of the 20 standard amino acids toward peroxyl radicals and other radicals were tested. The investigations revealed a correlation between the evolutionary appearance and the chemical reactivity of each amino acid, reflected in particular in the efficient reaction of tryptophan and tyrosine with peroxyl radicals.
This indicated a potential role of reactive oxygen species (ROS) in the constitution of the genetic code. Significant amounts of ROS first formed at the onset of the oxygenation of the geobiosphere, known as the Great Oxidation Event (GOE), which began about 2.3 billion years ago, and must have led to oxidative damage of vulnerable cellular structures. For this reason, the antioxidative potential of amino acids in the process of lipid peroxidation was investigated. It could be shown that lipophilic derivatives of tryptophan and tyrosine are able to prevent the peroxidation of rat brain membranes and to protect human fibroblasts from oxidative cell death. From this arose the postulate, put forward in this work, of a selective advantage during the GOE for primordial organisms that could incorporate tryptophan and tyrosine as redox-active amino acids into membrane proteins and were thus protected from oxidation processes. Accordingly, biochemical reactivity was identified as a selection parameter, and oxidative stress as a shaping factor, in the evolution of the genetic code.
Abstract:
Polar codes are the first class of error-correcting codes proven to achieve capacity for every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "channel polarization". This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the Successive Cancellation Decoder and the Successive Cancellation List Decoder is compared against results reported in the literature. To improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as the inner code and a CRC as the outer code. We also propose a new technique for analyzing channel polarization in the case of transmission over the AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
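For the binary erasure channel, the polarization recursion is simple enough to sketch directly (a standard textbook illustration, not the thesis's AWGN technique or simulator): one step of Arikan's transform turns two copies of BEC(eps) into a worse channel BEC(2*eps - eps**2) and a better channel BEC(eps**2), and recursing n times yields 2**n synthetic channels whose erasure probabilities cluster near 0 or 1.

```python
# Hedged sketch: channel polarization of BEC(eps) via the erasure-probability
# recursion. Near-perfect channels carry information bits; near-useless
# channels are frozen.

def polarize(eps, levels):
    chans = [eps]
    for _ in range(levels):
        chans = [e for c in chans for e in (2*c - c*c, c*c)]
    return chans

chans = polarize(0.5, 10)                  # 1024 synthetic channels from BEC(0.5)
good = sum(c < 1e-3 for c in chans)        # nearly noiseless
bad = sum(c > 1 - 1e-3 for c in chans)     # nearly useless
print(good, bad, len(chans))
```

Two properties are easy to check: the average erasure probability is conserved at every level (each pair averages back to eps), and for eps = 0.5 the distribution is symmetric, so the counts of good and bad channels are equal; the fraction of unpolarized channels shrinks as the block length grows.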
Abstract:
We investigate a recently proposed model for decision learning in a population of spiking neurons, where synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, we give a detailed computational analysis of how learning performance depends on population size and task complexity. Next, we extend the basic model to n-ary decision making and show that it can also be used in conjunction with other population codes, such as rate or even latency coding.
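As a toy illustration of the setting (assumptions: a REINFORCE-style reward-modulated rule on a shared scalar reward with a majority-vote readout; this is a simplification, not the paper's actual plasticity rule or its population signal):

```python
# Hedged sketch: binary spike/no-spike population decision making.
# Each of N stochastic neurons spikes with probability sigma(w . x); the
# population decision is a majority vote, and a shared reward R modulates
# a REINFORCE-style weight update.

import math, random

random.seed(1)
N, lr = 50, 0.5
W = [[0.0, 0.0] for _ in range(N)]
patterns = [([1.0, 0.0], 1), ([0.0, 1.0], 0)]   # (input, target decision)

def sigma(h):
    return 1.0 / (1.0 + math.exp(-h))

def trial(x, target, learn=True):
    probs = [sigma(w[0]*x[0] + w[1]*x[1]) for w in W]
    spikes = [1 if random.random() < p else 0 for p in probs]
    decision = 1 if sum(spikes) / N > 0.5 else 0   # spike/no-spike majority vote
    if learn:
        R = 1 if decision == target else -1        # scalar reward feedback
        for w, s, p in zip(W, spikes, probs):      # REINFORCE-style update
            for d in range(2):
                w[d] += lr * R * (s - p) * x[d]
    return decision

for _ in range(1000):
    x, t = random.choice(patterns)
    trial(x, t)

acc = sum(trial(x, t, learn=False) == t for x, t in patterns for _ in range(100)) / 200
print(acc)   # near 1.0 after learning
```

Even this stripped-down version shows the core difficulty analyzed in the work: every neuron receives the same scalar reward, so credit assignment degrades as the population grows unless additional signals modulate the update.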