990 results for "Forward error correcting code"


Relevance: 100.00%

Abstract:

Rodrigo, Chamizo, McLaren, & Mackintosh (1997) demonstrated the blocking effect in a navigational task using a swimming pool: rats initially trained to use three landmarks (ABC) to find an invisible platform learned less about a fourth landmark (X) added later than did rats trained from the outset with all four landmarks (ABCX). The experiment reported here aimed to demonstrate unblocking using a procedure similar to that of the previous work. Three groups of rats were initially trained to find an invisible platform in the presence of three landmarks: ABC for the Blocking and Unblocking groups, and LMN for the Control group. Then all animals were trained to find the platform in the presence of four landmarks, ABCX. In this second training, unlike the Blocking group, for which only a new landmark (X) was added relative to the first training, the Unblocking group also experienced a change in the platform position. For the Control group, both the four landmarks and the platform position were entirely new at the beginning of this second training. As in Rodrigo et al. (1997), a blocking effect was found: rats in the Blocking group learned less about the added landmark (X) than did animals in the Control group. However, rats in the Unblocking group learned about the added landmark (X) as well as animals in the Control group did. The results are interpreted as an unblocking effect due to the change in platform position between the two phases of training, much as in classical conditioning experiments, where a change in the conditions of reinforcement between the two training phases of a blocking design produces an attenuation or elimination of the effect. These results are explained within an error-correcting connectionist account of spatial navigation (McLaren, 2002).
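The error-correcting account invoked here can be illustrated with the classic Rescorla-Wagner delta rule, under which both blocking and unblocking fall out naturally. The following is a minimal sketch, not McLaren's (2002) actual connectionist model: the learning rate, trial counts, and the choice to give each platform position its own associative weight vector are illustrative assumptions.

```python
# Toy Rescorla-Wagner (delta-rule) simulation of the blocking/unblocking design.
# Parameters and the one-weight-vector-per-platform-position choice are assumed.
from collections import defaultdict

ALPHA, LAM, TRIALS = 0.3, 1.0, 60

def train(weights, cues, lam=LAM, trials=TRIALS):
    """Every cue present on a trial is updated by the shared prediction error."""
    for _ in range(trials):
        error = lam - sum(weights[c] for c in cues)   # lambda - sum(V)
        for c in cues:
            weights[c] += ALPHA * error
    return weights

# one weight vector per platform position (outcome)
block   = {'P1': defaultdict(float)}
unblock = {'P1': defaultdict(float), 'P2': defaultdict(float)}
control = {'P1': defaultdict(float), 'P2': defaultdict(float)}

train(block['P1'],   list('ABC'))        # phase 1: ABC -> platform P1
train(unblock['P1'], list('ABC'))
train(control['P1'], list('LMN'))

train(block['P1'],   list('ABCX'))       # phase 2: same platform -> X is blocked
train(unblock['P2'], list('ABCX'))       # phase 2: platform moved -> unblocking
train(control['P2'], list('ABCX'))       # phase 2: all new -> baseline learning

print(block['P1']['X'], unblock['P2']['X'], control['P2']['X'])
```

Phase-1 training drives the summed prediction error for ABC to zero, so X gains almost nothing in phase 2 unless the outcome (the platform position) changes and error is reintroduced, which is exactly the pattern of results reported.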

Relevance: 100.00%

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.

Relevance: 100.00%

Abstract:

There are numerous text documents available in electronic form. More and more are becoming available every day. Such documents represent a massive amount of information that is easily accessible. Seeking value in this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy and our understanding of such systems greatly influence their usefulness. In this paper, we seek 1) to advance the understanding of commonly used text classification techniques, and 2) through that understanding, improve the tools that are available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem which gives an explanation for the improvements that can be found in multiclass classification with Naive Bayes using Error-Correcting Output Codes. We use experimental evidence on two commonly-used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly-used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and the properties of text allows us to make better use of it in text classification.
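The setting of the theorem — multiclass classification with Naive Bayes via Error-Correcting Output Codes — can be sketched concretely. Below is a minimal, dependency-free ECOC scheme over a Bernoulli Naive Bayes learner on hypothetical toy data; the code matrix, the data, and the learner details are illustrative assumptions, not the paper's experimental setup.

```python
# ECOC sketch: each code-matrix column defines a binary dichotomy of the 4
# classes; one Bernoulli NB (Laplace-smoothed) is trained per column, and a
# prediction is decoded to the nearest codeword in Hamming distance.
import math

CODE = [  # 4 classes x 7 dichotomies; pairwise Hamming distance 4
    [1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1],
    [0, 1, 0, 1, 0, 1, 0],
]

def train_nb(X, y):
    """Bernoulli Naive Bayes with Laplace smoothing; returns per-class tables."""
    model = {}
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = math.log(len(rows) / len(X))
        p1 = [(sum(r[f] for r in rows) + 1) / (len(rows) + 2)
              for f in range(len(X[0]))]
        model[c] = (prior, p1)
    return model

def nb_predict(model, x):
    def ll(c):
        prior, p1 = model[c]
        return prior + sum(math.log(p if b else 1 - p) for b, p in zip(x, p1))
    return int(ll(1) > ll(0))

def sample(i, noise=None):
    """Class i has 'signature' bits 2i and 2i+1, plus an optional noise bit."""
    x = [0] * 8
    x[2 * i] = x[2 * i + 1] = 1
    if noise is not None:
        x[noise] = 1
    return x

X, y = [], []
for i in range(4):                       # deterministic toy training set
    for noise in (None, (2 * i + 2) % 8, (2 * i + 5) % 8):
        X.append(sample(i, noise)); y.append(i)

# one binary NB per code column, trained on relabeled data
learners = [train_nb(X, [CODE[label][j] for label in y]) for j in range(7)]

def ecoc_predict(x):
    bits = [nb_predict(m, x) for m in learners]
    # decode to the nearest codeword (this is where error correction happens)
    return min(range(4), key=lambda i: sum(b != c for b, c in zip(bits, CODE[i])))

preds = [ecoc_predict(sample(i)) for i in range(4)]
print(preds)
```

Because the codewords are pairwise Hamming distance 4 apart, the decoder tolerates one misbehaving binary learner per prediction — the redundancy that the paper's theorem connects to improved multiclass accuracy.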

Relevance: 100.00%

Abstract:

We compare Naive Bayes and Support Vector Machines on the task of multiclass text classification. Using a variety of approaches to combine the underlying binary classifiers, we find that SVMs substantially outperform Naive Bayes. We present full multiclass results on two well-known text data sets, including the lowest error to date on both data sets. We develop a new indicator of binary performance to show that the SVM's lower multiclass error is a result of its improved binary performance. Furthermore, we demonstrate and explore the surprising result that one-vs-all classification performs favorably compared to other approaches even though it has no error-correcting properties.
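The one-vs-all scheme the abstract evaluates is simple to state: train one binary classifier per class against all the others, then predict the class whose classifier outputs the largest decision value. The sketch below is a hedged, dependency-free illustration of the scheme only — a perceptron stands in for the SVM, and the 2-D toy data is invented; the paper's experiments use SVMs on text.

```python
# One-vs-all sketch: K binary classifiers, argmax of decision values.
# A perceptron substitutes for the SVM purely to keep this self-contained.
def decision(w, x):
    return sum(wi * xi for wi, xi in zip(w, x + [1.0]))  # last weight is bias

def perceptron(X, y):
    """Mistake-driven training until all points are classified (separable data)."""
    w = [0.0] * (len(X[0]) + 1)
    mistakes = True
    while mistakes:
        mistakes = False
        for x, t in zip(X, y):                   # t in {-1, +1}
            if t * decision(w, x) <= 0:
                w = [wi + t * xi for wi, xi in zip(w, x + [1.0])]
                mistakes = True
    return w

# three linearly separable 2-D clusters, one per class (toy data)
data = {
    0: [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]],
    1: [[4.0, 0.0], [5.0, 0.0], [4.0, 1.0]],
    2: [[0.0, 4.0], [1.0, 5.0], [0.0, 5.0]],
}
X = [p for pts in data.values() for p in pts]
labels = [c for c, pts in data.items() for _ in pts]

# one-vs-all: one binary perceptron per class, trained on relabeled data
models = {c: perceptron(X, [1 if l == c else -1 for l in labels]) for c in data}

def predict(x):
    return max(models, key=lambda c: decision(models[c], list(x)))

preds = [predict(p) for p in [(0.5, 0.5), (4.5, 0.5), (0.5, 4.5)]]
print(preds)
```

Note that, unlike the ECOC construction, nothing here corrects a wrong binary decision — which is what makes the reported strength of one-vs-all surprising.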

Relevance: 100.00%

Abstract:

Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism for both enclosing interactions among system components and coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. The architecture provides built-in Communicating Sequential Processes (CSP) components and predefined channels to coordinate exception handling of the user-defined components. Hence some safety properties concerning action scoping and concurrent exception handling can be proved using the FDR (Failures-Divergences Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used and proven as users define components with normal and exceptional behaviors.
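The coordination idea can also be sketched outside CSP. Below is a minimal, hypothetical Python illustration of the CA-action concept: roles execute concurrently inside one action scope, and all exceptions raised inside the scope are gathered and passed together to a single coordinated handler (forward error recovery) rather than handled in isolation. The names (`ca_action`, `degrade`, the sensor roles) are invented for illustration; the article's actual mechanism is the CSP architecture verified with FDR.

```python
# Hypothetical CA-action sketch: concurrent roles, coordinated exception handling.
from concurrent.futures import ThreadPoolExecutor

class SensorFault(Exception):
    pass

def ca_action(roles, handler):
    """Run all roles concurrently; on any failure, hand ALL raised exceptions
    to one coordinated handler (forward error recovery) for the whole action."""
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        futures = [pool.submit(role) for role in roles]
    results, exceptions = [], []
    for f in futures:                     # all roles have completed here
        try:
            results.append(f.result())
        except Exception as exc:
            exceptions.append(exc)
    if exceptions:
        return handler(exceptions, results)
    return results

def read_primary():
    raise SensorFault("primary sensor offline")

def read_backup():
    return 21.5

def degrade(exceptions, partial_results):
    # coordinated forward recovery: continue with the surviving readings
    return ("degraded", partial_results)

outcome = ca_action([read_primary, read_backup], degrade)
print(outcome)
```

The point of the coordination is that the handler sees the whole set of concurrent exceptions at once, so recovery can account for every participant's state — the property the article's CSP model makes amenable to mechanical verification.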

Relevance: 100.00%

Abstract:

Wireless Mesh Networks (WMNs) are expected to be one of the most important wireless technologies for providing last-mile access in future multimedia networks. They will allow thousands of fixed and mobile users to access, produce, and share multimedia content ubiquitously. In this context, 3D video is expected to attract an ever larger share of the multimedia market, with the prospect of enhancing applications such as surveillance video, mission-critical control, and entertainment. However, the challenge of coping with the fluctuating bandwidth, scarce resources, and time-varying error rates of these networks illustrates the need for more error-resilient 3D video transmission. Alternatives such as Forward Error Correction (FEC) approaches thus become necessary to deliver video applications to wireless users with better assured Quality of Service (QoS) and Quality of Experience (QoE). This dissertation presents an FEC-based mechanism with Unequal Error Protection (UEP) to improve 3D video transmission in WMNs, increasing user satisfaction and enabling better use of wireless resources. The benefits and impacts of the proposed mechanism are demonstrated through simulation, and the evaluation is carried out using objective and subjective QoE metrics.
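The UEP idea the dissertation builds on can be sketched with the simplest possible FEC: XOR parity. In the sketch below, the (assumed) base layer of the 3D video gets one parity packet per two data packets, while the enhancement layer gets one per four — more redundancy where loss hurts quality most. Packet contents, group sizes, and layer names are illustrative, not the dissertation's mechanism.

```python
# Hedged UEP-FEC sketch: single-parity XOR stands in for a real FEC code.
def xor_parity(packets):
    """XOR of equal-length packets; serves as a single parity packet."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(group, parity):
    """Rebuild at most one missing packet (None) per parity group."""
    missing = [i for i, p in enumerate(group) if p is None]
    if len(missing) != 1:
        return group            # nothing lost, or too many losses for one parity
    group[missing[0]] = xor_parity([p for p in group if p is not None] + [parity])
    return group

base = [b"I-fr", b"dpth"]                    # heavy protection: parity per 2 packets
enh = [b"enh0", b"enh1", b"enh2", b"enh3"]   # light protection: parity per 4 packets
parity_base, parity_enh = xor_parity(base), xor_parity(enh)

damaged = [base[0], None]                    # one base-layer packet lost in the WMN
print(recover(damaged, parity_base) == base)
```

With this split, the base layer survives a 50% loss within its group while the enhancement layer tolerates only 25% — the unequal protection that lets QoE degrade gracefully under wireless loss.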

Relevance: 100.00%

Abstract:

This work presents a study of video transmission over wireless systems. The applied methodology aims to demonstrate a direct relationship between BER and quality loss (PSNR loss) in video transmissions over OFDM (Orthogonal Frequency Division Multiplexing) systems. The results were obtained from simulations developed in the Matlab® computing environment and from measurements in real scenarios, carried out on the university campus and inside the research laboratory, in a controlled environment. Comparing simulated and measured data confirmed the relationship between BER and PSNR loss, resulting in the formulation of an empirical cross-layer model with an exponential characteristic. The model achieved an RMS error and standard deviation close to 1.65 dB when compared with the simulations. In addition, it was validated against data obtained from real scenarios that were not used to fit the parameters of the resulting equation. The model does not require specifying the channel type or the coding used in the FEC (Forward Error Correction), enabling future integration with network planning software, whether commercial or open-source.
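An empirical exponential model of this kind is typically fitted by log-linearization. The sketch below shows the technique on synthetic data; the functional form loss = a·exp(b·BER) and every number in it are illustrative assumptions — the dissertation's actual coefficients come from its simulations and field measurements.

```python
# Hedged sketch: least-squares fit of an exponential BER -> PSNR-loss model
# via log-linearization (ln y = ln a + b x reduces to linear regression).
import math

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) by linear regression on ln(y)."""
    n = len(x)
    ly = [math.log(v) for v in y]
    mx, my = sum(x) / n, sum(ly) / n
    b = (sum((xi - mx) * (li - my) for xi, li in zip(x, ly))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

# synthetic 'measurements': PSNR loss (dB) versus BER, hypothetical ground truth
ber = [1e-4, 5e-4, 1e-3, 2e-3, 5e-3]
loss = [0.5 * math.exp(900 * v) for v in ber]

a, b = fit_exponential(ber, loss)
print(round(a, 3), round(b, 1))
```

On real, noisy BER/PSNR pairs the same fit yields the residual statistics the abstract reports (RMS error against held-out scenarios), which is how a model like this is validated without specifying channel or FEC coding.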

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Pós-graduação em Matemática em Rede Nacional - IBILCE

Relevance: 100.00%

Abstract:

The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power, and cost of integrated circuits while increasing their computational performance. This trend, known as technology scaling, is approaching the nanometer scale. The uncertainty of the lithographic process in the manufacturing stage grows as transistor sizes shrink, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage limits the scaling of the threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway, and thermal cycling. In addition, the introduction of new materials and the smaller device dimensions reduce transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss.

The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the thermal behavior of a system and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of a system; iii) statistical performance analysis able to predict the impact of both random and systematic process variation. These new analysis tools must be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase device reliability and lifetime by acting on tunable parameters such as supply voltage or body bias; ii) error-detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB), and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC).

The literature already features works addressing the prediction of MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first design stages of embedded NoC design.

Later, I focused my research on the development of a statistical process-variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor, and the results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variation. In this case, we discovered the superior robustness of low-swing links to systematic process variation, together with a good response to compensation techniques such as ASV and ABB. Low-swing signaling is therefore a good alternative to standard CMOS communication in terms of power, speed, reliability, and manufacturability. In summary, my work proves the advantage of integrating a statistical process-variation analysis tool into the first stages of the design flow.
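The core of a statistical process-variation analysis like the one advocated above is Monte Carlo sampling of device parameters propagated through a performance model. The sketch below is a hedged toy version: threshold-voltage samples are pushed through the alpha-power-law delay model. The distribution, the alpha exponent, and all nominal values are illustrative assumptions, not the thesis's tool or data.

```python
# Toy Monte Carlo process-variation analysis: sample Vth, propagate to delay.
import random
import statistics

random.seed(0)                                # reproducible experiment
VDD, VTH_NOM, SIGMA, ALPHA = 1.0, 0.30, 0.03, 1.3   # assumed values

def gate_delay(vth, k=1.0):
    """Alpha-power-law delay model: d = k * Vdd / (Vdd - Vth)**alpha."""
    return k * VDD / (VDD - vth) ** ALPHA

# random (within-die) variation as i.i.d. Gaussian samples of Vth
samples = [gate_delay(random.gauss(VTH_NOM, SIGMA)) for _ in range(10_000)]
mean, sd = statistics.mean(samples), statistics.stdev(samples)
print(f"nominal={gate_delay(VTH_NOM):.3f} mean={mean:.3f} "
      f"worst-case(3-sigma)={mean + 3 * sd:.3f}")
```

A systematic component could be modeled by adding a per-die offset to `VTH_NOM` before sampling — the random-versus-systematic split that the text says the tool must address.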

Relevance: 100.00%

Abstract:

This dissertation concerns the intersection of three areas of discrete mathematics: finite geometries, design theory, and coding theory. The central theme is the power of finite geometry designs, which are constructed from the points and t-dimensional subspaces of a projective or affine geometry. We use these designs to construct and analyze combinatorial objects which inherit their best properties from these geometric structures. A central question in the study of finite geometry designs is Hamada’s conjecture, which proposes that finite geometry designs are the unique designs with minimum p-rank among all designs with the same parameters. In this dissertation, we will examine several questions related to Hamada’s conjecture, including the existence of counterexamples. We will also study the applicability of certain decoding methods to known counterexamples. We begin by constructing an infinite family of counterexamples to Hamada’s conjecture. These designs are the first infinite class of counterexamples for the affine case of Hamada’s conjecture. We further demonstrate how these designs, along with the projective polarity designs of Jungnickel and Tonchev, admit majority-logic decoding schemes. The codes obtained from these polarity designs attain error-correcting performance which is, in certain cases, equal to that of the finite geometry designs from which they are derived. This further demonstrates the highly geometric structure maintained by these designs. Finite geometries also help us construct several types of quantum error-correcting codes. We use relatives of finite geometry designs to construct infinite families of q-ary quantum stabilizer codes. We also construct entanglement-assisted quantum error-correcting codes (EAQECCs) which admit a particularly efficient and effective error-correcting scheme, while also providing the first general method for constructing these quantum codes with known parameters and desirable properties. Finite geometry designs are used to give exceptional examples of these codes.
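Majority-logic decoding, the decoding style shown to apply to these polarity designs, can be illustrated on a textbook case: the first-order Reed-Muller code RM(1,3), an [8,4,4] code whose classic Reed decoder recovers each message bit by a majority vote over disjoint parity sums. This is a standard toy example for the principle, not one of the dissertation's codes.

```python
# One-step majority-logic (Reed) decoding of RM(1,3): codeword positions are
# indexed by (x1,x2,x3) in {0,1}^3 and c(p) = m0 + m1*x1 + m2*x2 + m3*x3 (GF(2)).
def encode(m):
    return [m[0] ^ (m[1] & (p & 1)) ^ (m[2] & (p >> 1 & 1)) ^ (m[3] & (p >> 2 & 1))
            for p in range(8)]

def majority(bits):
    return int(sum(bits) * 2 > len(bits))

def decode(r):
    m = [0, 0, 0, 0]
    for i in (1, 2, 3):                       # recover each x_i coefficient
        mask = 1 << (i - 1)
        # four disjoint parity checks on m_i; each pairs p with p|mask
        votes = [r[p] ^ r[p | mask] for p in range(8) if not p & mask]
        m[i] = majority(votes)
    # strip the recovered linear part; what remains is m0 at every position
    residual = [r[p] ^ (m[1] & (p & 1)) ^ (m[2] & (p >> 1 & 1)) ^ (m[3] & (p >> 2 & 1))
                for p in range(8)]
    m[0] = majority(residual)
    return m

msg = [1, 0, 1, 1]
word = encode(msg)
word[5] ^= 1                                   # inject a single bit error
print(decode(word) == msg)                     # the error is out-voted
```

Any single error corrupts at most one of the four parity checks for each coefficient, so every majority vote still comes out right — the same orthogonal-check principle that underlies majority-logic decoding of the finite geometry codes.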

Relevance: 100.00%

Abstract:

The Gangdese belt, Tibet, records the opening and closure of the Neo-Tethyan ocean and the resultant collision between the Indian and Eurasian plates. Mesozoic magmatic rocks generated through subduction of the Tethyan oceanic slab constitute the main component of the Gangdese belt, and play a crucial role in understanding the formation and evolution of the Neo-Tethyan tectonic realm. U-Pb and Lu-Hf isotopic data for tonalite and granodiorite from the Xietongmen-Nymo segment of the Gangdese belt indicate a significant pulse of Jurassic magmatism from 184 Ma to 168 Ma. The magmatic rocks belong to the metaluminous medium-K calc-alkaline series, characterized by regular variation in major element compositions, with SiO2 of 61.35-73.59 wt.% and low to moderate MgO (0.31-2.59 wt.%) with Mg# of 37-45. These magmatic rocks are also characterized by LREE enrichment with a concave-upward trend in MREE on the chondrite-normalized REE patterns, as well as LILE enrichment and depletion in Nb, Ta, and Ti in the primitive-mantle-normalized spidergrams. These rocks have high zircon εHf(t) values of +10.94 to +15.91 and young two-stage depleted mantle model ages (TDM2) of 192 Ma to 670 Ma. The low MgO contents and relatively depleted Hf isotope compositions suggest that the granitoid rocks were derived from partial melting of the juvenile basaltic lower crust with minor injected mantle material. Combined with published data, these results suggest that northward subduction of the Neo-Tethyan slab beneath the Lhasa terrane began by the Late Triassic, forming a major belt of arc-related magmatism.

Relevance: 100.00%

Abstract:

The Duolong porphyry Cu-Au deposit (5.4 Mt Cu at 0.72%, 41 t Au at 0.23 g/t), which is related to the granodiorite porphyry and the quartz-diorite porphyry from the Bangongco copper belt in central Tibet, formed in a continental arc setting. Here, we present zircon U-Pb ages, whole-rock geochemical and Sr-Nd isotopic data, and in-situ zircon Hf-O isotopic data for the Duolong porphyries. Secondary ion mass spectrometry (SIMS) zircon U-Pb analyses for six samples yielded consistent ages of ~118 Ma, indicating a Cretaceous formation age. The Duolong porphyries (SiO2 of 58.81-68.81 wt.%, K2O of 2.90-5.17 wt.%) belong to the high-K calc-alkaline series. They show light rare earth element (LREE)-enriched distribution patterns with (La/Yb)N = 6.1-11.7, enrichment in large ion lithophile elements (e.g., Cs, Rb, and Ba), and depletion of high field strength elements (e.g., Nb), with negative Ti anomalies. All zircons from the Duolong porphyries share relatively similar Hf-O isotopic compositions (δ18O = 5.88-7.27‰; εHf(t) = 3.6-7.3), indicating that they crystallized from a series of cogenetic melts with various degrees of fractional crystallization. This, along with the general absence of older inherited zircons, rules out significant crustal contamination during zircon growth. The zircons are mostly enriched in 18O relative to mantle values, indicating the involvement of an 18O-enriched crustal source in the generation of the Duolong porphyries. Together with the presence of syn-mineralization basaltic andesite, mixing between silicic melts derived from the lower crust and evolved H2O-rich mafic melts derived from the metasomatized mantle wedge, followed by subsequent fractional crystallization (FC) and minor crustal contamination in the shallow crust, could well explain the petrogenesis of the Duolong porphyries. Significantly, the hybrid melts possibly inherited the arc-magma characteristics of abundant F, Cl, Cu, and Au and a high oxidation state, which contributed to the formation of the Duolong porphyry Cu-Au deposit.

Relevance: 100.00%

Abstract:

Global warming has been reported to cause growth reductions in tropical shallow-water corals in both cooler and warmer regions of coral species ranges. This suggests regional adaptation, with less heat-tolerant populations in cooler regions and more thermo-tolerant populations in warmer regions. Here, we investigated seasonal changes in the in situ metabolic performance of the widely distributed hermatypic coral Pocillopora verrucosa across 12 degrees of latitude featuring a steep temperature gradient between the northern (28.5 degrees N, 21-27 degrees C) and southern (16.5 degrees N, 28-33 degrees C) reaches of the Red Sea. Surprisingly, we found little indication of regional adaptation, but strong indications of high phenotypic plasticity: calcification rates in two seasons (winter, summer) were highest at 28-29 degrees C in all populations, independent of their geographic location. Mucus release increased with temperature and nutrient supply, both being highest in the south. Genetic characterization of the coral host revealed low inter-regional variation, and differences in Symbiodinium clade composition only at the most northern and most southern regions. This suggests variable acclimatization potential to ocean warming across Red Sea coral populations: high acclimatization potential in northern populations, but limited ability to cope with ocean warming in southern populations, which already exist at the upper thermal margin for corals.