922 results for Error Correction Coding, Error Resilience, MPEG-4, Video Coding
Abstract:
Cognitive radio is a growing area in wireless communication that offers an opportunity for full utilization of inefficiently used frequency spectrum: the secondary user is permitted to use the frequency band provided it creates no interference for the primary (licensed) user. However, designing a model in which the secondary user causes the least interference to the primary user is a challenging task. In this study we propose a transmission model based on error-correcting codes that handles a countable number of pairs of primary and secondary users. Specifically, we obtain effective utilization of the spectrum by transmitting the primary and secondary users' data through linear codes of different given lengths. Using techniques from error-correcting codes, we develop several schemes for appropriate bandwidth distribution in cognitive radio.
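As a minimal illustration of the kind of linear code the abstract refers to (the specific codes and lengths used in the paper are not given here), the following sketch encodes a user's 4-bit message with the classical (7,4) Hamming code by multiplying it with a generator matrix over GF(2):

```python
# Generator matrix of the (7,4) Hamming code, a small linear block code
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(message, G):
    # codeword = message * G over GF(2)
    n = len(G[0])
    return [sum(m * G[i][j] for i, m in enumerate(message)) % 2
            for j in range(n)]
```

Each pair of users would encode its data with a code of its assigned length; here, `encode([1, 0, 1, 1], G)` yields the 7-bit codeword `[1, 0, 1, 1, 0, 1, 0]`.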
Abstract:
Background: Next-generation sequencing (NGS) allows for sampling numerous viral variants from infected patients. This provides a novel opportunity to represent and study the mutational landscape of Hepatitis C Virus (HCV) within a single host. Results: Intra-host variants of the HCV E1/E2 region were extensively sampled from 58 chronically infected patients. After NGS error correction, the average numbers of reads and variants obtained from each sample were 3202 and 464, respectively. The distance between each pair of variants was calculated and networks were created for each patient, where each node is a variant and two nodes are connected by a link if the nucleotide distance between them is 1. The work focused on large components having > 5% of all reads, which on average account for 93.7% of all reads found in a patient. The distance between any two variants calculated over the component correlated strongly with nucleotide distances (r = 0.9499; p = 0.0001), a better correlation than the one obtained with Neighbour-Joining trees (r = 0.7624; p = 0.0001). In each patient, components were well separated, with the average distance between components (6.53%) being 10 times greater than the average distance within a component (0.68%). The ratio of nonsynonymous to synonymous changes was calculated, and some patients (6.9%) showed a mixture of networks under strong negative and positive selection. All components were robust to in silico stochastic sampling; even after randomly removing 85% of all reads, the largest connected component in the new subsample still involved 82.4% of the remaining nodes. In vitro sampling showed that 93.02% of components present in the original sample were also found in experimental replicas, with 81.6% of reads found in both.
When syringe-sharing transmission events were simulated, 91.2% of all simulated transmission events seeded all components present in the source. Conclusions: Most intra-host variants are organized into distinct single-mutation components that are well separated from each other, represent genetic distances between viral variants, and are robust to sampling, reproducible and likely seeded during transmission events. Facilitated by NGS, large components offer a novel evolutionary framework for genetic analysis of intra-host viral populations and for understanding transmission, immune escape and drug resistance.
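The network construction described above (nodes are variants, links connect variants at nucleotide distance 1, analysis focuses on connected components) can be sketched in a few lines; this is a toy reconstruction of the idea, not the authors' pipeline:

```python
from collections import deque

def hamming(a, b):
    # nucleotide distance between two equal-length sequences
    return sum(x != y for x, y in zip(a, b))

def components(variants):
    # link two variants iff their nucleotide distance is exactly 1,
    # then extract connected components by breadth-first search
    adj = {v: [] for v in variants}
    for i, v in enumerate(variants):
        for w in variants[i + 1:]:
            if len(v) == len(w) and hamming(v, w) == 1:
                adj[v].append(w)
                adj[w].append(v)
    seen, comps = set(), []
    for v in variants:
        if v in seen:
            continue
        comp, queue = [], deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for nb in adj[u]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        comps.append(sorted(comp))
    return comps
```

For example, `components(["AAAA", "AAAT", "AATT", "CCCC", "CCCG"])` yields two components: one of three single-mutation neighbours and one of two.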
Abstract:
The aim of this study is to determine whether Brazil's economic growth has been constrained by the balance of payments in the long run. The question underpinning the analysis can be expressed as follows: was economic growth in the period 1951-2008 constrained by the balance of payments? To answer this question, the study employs the externally constrained growth methodology developed by Lima and Carvalho (2009), among others. The main statistical method used is vector error correction modeling. The conclusion is that the rate of economic growth in Brazil was restricted by the external sector in the period concerned, validating the theory of balance-of-payments-constrained growth with regard to the economic history of Brazil.
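The key ingredient of vector error correction is the error-correction term: deviations from a long-run equilibrium relation feed back, with a negative coefficient, into short-run changes. A minimal single-equation sketch with simulated data (not the paper's series or specification) illustrates this:

```python
import random

def ols_slope(x, y):
    # least-squares slope of y on x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

random.seed(0)
n = 500
x = [0.0]                                   # a random-walk regressor
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))
e = [random.gauss(0, 1) for _ in range(n)]  # stationary deviations
y = [2.0 * xi + ei for xi, ei in zip(x, e)] # y cointegrated with x

beta = ols_slope(x, y)                      # long-run relation y ~ beta * x
z = [yi - beta * xi for xi, yi in zip(x, y)]  # error-correction term
dy = [y[t] - y[t - 1] for t in range(1, n)]
gamma = ols_slope(z[:-1], dy)               # speed of adjustment
```

A negative `gamma` (here close to -1) means that when `y` is above its long-run relation with `x`, it subsequently falls back toward equilibrium, which is the mechanism the cointegration test exploits.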
Abstract:
The focus of this paper is to address some classical results for a class of hypercomplex numbers. More specifically, we present an extension of the Square of the Error Theorem and a Bessel inequality for octonions.
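For reference, the classical Bessel inequality that the paper extends to the octonionic setting states that, for an orthonormal system \(e_1, \dots, e_n\) in an inner product space:

```latex
\sum_{k=1}^{n} \left| \langle x, e_k \rangle \right|^{2} \;\le\; \|x\|^{2}
```

The octonionic extension presented in the paper must account for the non-associativity of octonion multiplication; the precise statement is not reproduced here.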
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The number of electronic devices connected to agricultural machinery is increasing to support new Precision Agriculture (PA) tasks such as spatial variability mapping and Variable Rate Technology (VRT). The Distributed Control System (DCS) is a suitable solution for decentralizing the data acquisition system, and the Controller Area Network (CAN) is the major trend among embedded communication protocols for agricultural machinery and vehicles. The application of soil correctives is a typical problem in Brazil. The efficiency of this correction process depends heavily on how the inputs are applied to the soil, and the occurrence of errors directly affects agricultural yield. To handle this problem, this paper presents the development of a CAN-based distributed control system for a VRT system for soil correctives in agricultural machinery. The VRT system is composed of a tractor-implement set that applies a desired rate of inputs according to a georeferenced prescription map of the farm field. The performance of the CAN-based VRT system was evaluated through experimental tests and by analyzing the CAN messages transmitted during operation of the entire system. The control errors, judged against the requirements of agricultural application, allow us to conclude that the developed VRT system is suitable for agricultural production, reaching an acceptable response time and application error. The CAN-based DCS solution applied in the VRT system reduced the complexity of the control system, easing installation and maintenance. The use of the VRT system allowed applying only the required inputs, increasing operational efficiency and minimizing environmental impact.
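The core of a VRT controller of this kind is looking up the prescribed rate for the machine's current position and tracking the application error. The sketch below assumes a hypothetical gridded prescription map (the paper's map format and cell size are not specified here):

```python
def cell_for(lat, lon, origin, cell_deg):
    # map a GPS fix to the corresponding cell of a gridded prescription map;
    # origin = (lat, lon) of the map's corner, cell_deg = cell size in degrees
    return (int((lat - origin[0]) / cell_deg),
            int((lon - origin[1]) / cell_deg))

def application_error(applied, target):
    # relative error between the applied and the prescribed rate
    return abs(applied - target) / target

# hypothetical prescription map: (row, col) -> corrective rate in kg/ha
prescription = {(0, 0): 100.0, (0, 1): 120.0}
cell = cell_for(-22.005, -47.998, origin=(-22.01, -48.0), cell_deg=0.01)
target = prescription[cell]
```

In a real system the applied rate would be read back from the implement over CAN and compared against `target` to log the control error.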
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
You published recently (Nature 374, 587; 1995) a report headed "Error re-opens 'scientific' whaling debate". The error in question, however, relates to commercial whaling, not to scientific whaling. Although Norway cites science as a basis for the way in which it sets its own quota, scientific whaling means something quite different, namely killing whales for research purposes. Any member of the International Whaling Commission (IWC) has the right to conduct a research catch under the International Convention for the Regulation of Whaling, 1946. The IWC has reviewed new research or scientific whaling programs for Japan and Norway since the IWC moratorium on commercial whaling began in 1986. In every case, the IWC advised Japan and Norway to reconsider the lethal aspects of their research programs. Last year, however, Norway started a commercial hunt in combination with its scientific catch, despite the IWC moratorium.
Abstract:
I. Gunter and Christmas (1973) described the events leading to the stranding of a baleen whale on Ship Island, Mississippi, in 1968, giving the species as Balaenoptera physalus, the Rorqual. Unfortunately the identification was in error, but fortunately good photographs were shown. The underside of the tail was a splotched white, but there was no black margin. The specimen also had fewer throat and belly grooves than the Rorqual, as a comparison with True's (1904) photograph shows. Dr. James Mead (in litt.) pointed out that the animal was a Sei Whale, Balaenoptera borealis. This remains a new Mississippi record and, according to Lowery's (1974) count, it is the fifth specimen reported from the Gulf of Mexico. The stranding of a sixth Sei Whale on Anclote Keys in the Gulf, west of Tarpon Springs, Florida, on 30 May 1974 was reported in the newspapers and by the Smithsonian Institution (1974).
II. Gunter, Hubbs and Beal (1955) gave measurements on a Pygmy Sperm Whale, Kogia breviceps, which stranded on Mustang Island on the Texas coast, and commented upon the recorded variations of proportional measurements in this species. Then, according to Raun, Hoese and Moseley (1970), these questions were resolved by Handley (1966), who showed that a second species, Kogia simus, the Dwarf Sperm Whale, is also present in the western North Atlantic. Handley's argument is based on skull comparisons and it seems to be rather indubitable. According to Raun et al. (op. cit.), the stranding of a species of Kogia on Galveston Island recorded by Caldwell, Ingles and Siebenaler (1960) was K. simus. They also say that Caldwell (in litt.) had previously come to the same conclusion. Caldwell et al. also recorded another specimen from Destin, Florida, which is now considered to have been a specimen of simus. The known status of these two little sperm whales in the Gulf is summarized by Lowery (op. cit.).
Abstract:
Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003 Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
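To see why maximum-likelihood decoding is costly, note that on a binary symmetric channel it reduces to finding the codeword nearest to the received word in Hamming distance, which naively requires enumerating all 2^k codewords. This brute-force baseline (shown here for the small (7,4) Hamming code, not any code from the dissertation) is exactly what relaxations like the LP decoder and Omura's algorithm try to avoid:

```python
from itertools import product

# generator matrix of the (7,4) Hamming code
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def codewords(G):
    # enumerate all 2^k codewords of the linear code generated by G over GF(2)
    k, n = len(G), len(G[0])
    for m in product([0, 1], repeat=k):
        yield tuple(sum(m[i] * G[i][j] for i in range(k)) % 2
                    for j in range(n))

def ml_decode(received, G):
    # on a BSC, ML decoding = nearest codeword in Hamming distance;
    # exponential in k, hence the search for efficient approximations
    return min(codewords(G),
               key=lambda c: sum(a != b for a, b in zip(c, received)))
```

A received word with a single bit flip is always returned to the transmitted codeword, since the (7,4) Hamming code corrects one error.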
Abstract:
This work develops a computational approach to boundary and initial-value problems using operational matrices in order to run an evolutive process in a Hilbert space. In addition, upper bounds for the errors in the solutions and in their derivatives can be estimated, providing accuracy measures.
Abstract:
When a physical activity professional is teaching a motor skill, he or she evaluates the learner's movement and considers which interventions could be made at that moment. However, instructors often lack resources that could help them evaluate the learner's movement. The skill acquisition process could be facilitated if instructors had an instrument that identifies errors and prioritizes the information to be given to the learner. Considering that the specialized literature lacks information about such a tool, the purpose of this study was to develop, and to determine the objectivity and reliability of, an instrument to assess the movement quality of the basketball free throw. The checklist was developed and evaluated by basketball experts. Additionally, the checklist was used to assess 10 trials (edited video) from four individuals at different learning stages. Data were organized by the critical error and the error sum identified by the experts on two different occasions (one-week interval). Comparing the two evaluations, and also comparing different experts' assessments of error sum and critical error, an average error of 16.9% was observed. It was concluded that the checklist to assess the basketball free throw is reliable and could help instructors make a qualitative analysis. Moreover, the checklist may allow instructors to make assumptions about the motor learning process.
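One simple way to quantify the kind of test-retest and inter-rater disagreement reported above is the mean relative difference between two sets of error counts per trial; this is an illustrative statistic, not necessarily the one the authors computed:

```python
def mean_relative_disagreement(rater_a, rater_b):
    # average relative difference between two raters' (or two occasions')
    # error counts, trial by trial; trials where both counted zero are skipped
    diffs = [abs(a - b) / max(a, b)
             for a, b in zip(rater_a, rater_b) if max(a, b) > 0]
    return sum(diffs) / len(diffs)
```

For example, error sums of `[4, 2, 5]` versus `[5, 2, 4]` over three trials give a mean relative disagreement of about 13.3%, in the same spirit as the 16.9% average error reported.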
Abstract:
In this paper, we perform a thorough analysis of a spectral phase-encoded time spreading optical code division multiple access (SPECTS-OCDMA) system based on Walsh-Hadamard (W-H) codes, aiming not only at finding optimal code-set selections but also at assessing the system's loss of security due to crosstalk. We prove that an inadequate choice of codes can make the crosstalk between active users large enough for the data of the user of interest to be detected by another user. The proposed algorithm for code optimization targets code sets that produce the minimum bit error rate (BER) among all codes for a specific number of simultaneous users. This methodology allows us to find optimal code sets for any OCDMA system, regardless of the code family used and the number of active users. This procedure is crucial for circumventing the unexpected lack of security due to crosstalk. We also show that a SPECTS-OCDMA system based on W-H 32 (64) fundamentally limits the number of simultaneous users to 4 (8) with no security violation due to crosstalk. More importantly, we prove that only a small fraction of the available code sets is actually immune to crosstalk with acceptable BER (< 10^-9), i.e., approximately 0.5% for W-H 32 with four simultaneous users, and about 1 × 10^-4 % for W-H 64 with eight simultaneous users.
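The Walsh-Hadamard codes in question are the rows of a Hadamard matrix, which can be generated by the standard Sylvester recursion; this sketch shows the construction only, not the paper's code-set optimization:

```python
def hadamard(order):
    # Sylvester construction: H_{2n} = [[H_n, H_n], [H_n, -H_n]];
    # order must be a power of two
    H = [[1]]
    while len(H) < order:
        H = ([row + row for row in H]
             + [row + [-v for v in row] for row in H])
    return H
```

The rows of `hadamard(32)` or `hadamard(64)` are the W-H 32 / W-H 64 code families: any two distinct rows are orthogonal (zero dot product), which is the property the code-set selection exploits.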
Abstract:
The presence of cognitive impairment is a frequent complaint among elderly individuals in the general population. This study aimed to investigate the relationship between aging-related regional gray matter (rGM) volume changes and cognitive performance in healthy elderly adults. Morphometric magnetic resonance imaging (MRI) measures were acquired in a community-based sample of 170 cognitively preserved subjects (66 to 75 years). This sample was drawn from the "Sao Paulo Ageing and Health" study, an epidemiological study aimed at investigating the prevalence of and risk factors for Alzheimer's disease in a low-income region of the city of Sao Paulo. All subjects underwent cognitive testing using a battery cross-culturally validated by the Research Group on Dementia 10/66, as well as the SKT (applied on the day of MRI scanning). Blood genotyping was performed to determine the frequency of the three apolipoprotein E allele variants (APOE epsilon 2/epsilon 3/epsilon 4) in the sample. Voxelwise linear correlation analyses between rGM volumes and cognitive test scores were performed using voxel-based morphometry, including chronological age as a covariate. There were significant direct correlations between worse overall cognitive performance and rGM reductions in the right orbitofrontal cortex and parahippocampal gyrus, and also between verbal fluency scores and bilateral parahippocampal gyral volume (p < 0.05, familywise-error corrected for multiple comparisons using small volume correction). When the analyses were repeated adding the presence of the APOE epsilon 4 allele as a confounding covariate, or excluding the minority of APOE epsilon 2 carriers, all findings retained significance. These results indicate that rGM volumes are relevant biomarkers of cognitive deficits in healthy aging individuals, most notably involving temporolimbic regions and the orbitofrontal cortex.
Abstract:
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
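Whether a given binary word belongs to a Hamming code can be checked mechanically with the code's parity-check matrix: the word is a codeword if and only if its syndrome is the zero vector. The sketch below shows this check for the (7,4) Hamming code; how the paper maps DNA letters onto code symbols is its own contribution and is not reproduced here:

```python
# parity-check matrix of the (7,4) Hamming code
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(word):
    # multiply the word by the parity-check matrix over GF(2)
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

def is_codeword(word):
    # a word belongs to the code iff its syndrome is the zero vector
    return syndrome(word) == (0, 0, 0)
```

A sequence identified as a codeword passes this test exactly, while flipping any single symbol produces a nonzero syndrome, which is what makes such a code error-detecting.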