919 results for Semantic Error
Abstract:
Cognitive radio is a growing area in wireless communication that offers an opportunity for the full utilization of inefficiently used frequency spectrum: the secondary user is permitted to use the frequency band without creating interference for the primary (licensed) user. However, designing a model in which the secondary user causes the least interference to the primary user is a challenging task. In this study we propose a transmission model based on error-correcting codes dealing with a countable number of pairs of primary and secondary users. We obtain an effective utilization of the spectrum by transmitting the data of the primary and secondary user pairs through linear codes of different given lengths. Using techniques from error-correcting codes, we develop a number of schemes for an appropriate bandwidth distribution in cognitive radio.
Abstract:
In this paper we introduce a type of hypercomplex Fourier series based on quaternions, and discuss a hypercomplex version of the Square of the Error Theorem. Since their discovery by Hamilton (Sinegre [1]), quaternions have provided beautiful insights both into the structure of different areas of mathematics and into the connections of mathematics with other fields. For instance: I) the Pauli spin matrices used in physics can be easily explained through quaternion analysis (Lan [2]); II) the Fundamental Theorem of Algebra (Eilenberg [3]), which asserts that a polynomial of degree n in quaternions maps the four-dimensional sphere of all real quaternions, with the point at infinity added, into itself, and that the degree of this map is n. Motivated by earlier works by two of us on power series (Pendeza et al. [4]), and by a recent paper on Liouville's Theorem (Borges and Marão [5]), we obtain a hypercomplex version of the Fourier series, which hopefully can be used for the treatment of hypergeometric partial differential equations such as the damped harmonic oscillation.
Abstract:
The focus of this paper is to address some classical results for a class of hypercomplex numbers. More specifically, we present an extension of the Square of the Error Theorem and a Bessel inequality for octonions.
Abstract:
Corresponding to $C_{0}[n,n-r]$, a binary cyclic code generated by a primitive irreducible polynomial $p(X)\in \mathbb{F}_{2}[X]$ of degree $r=2b$, where $b\in \mathbb{Z}^{+}$, we can construct a binary cyclic code $C[(n+1)^{3^{k}}-1,(n+1)^{3^{k}}-1-3^{k}r]$, which is generated by the primitive irreducible generalized polynomial $p(X^{\frac{1}{3^{k}}})\in \mathbb{F}_{2}[X;\frac{1}{3^{k}}\mathbb{Z}_{0}]$ of degree $3^{k}r$, where $k\in \mathbb{Z}^{+}$. This new code $C$ improves the code rate and has a higher error-correction capability than $C_{0}$. The purpose of this study is to establish a decoding procedure for $C_{0}$ by using $C$ in such a way that one obtains an improved code rate and error-correcting capability for $C_{0}$.
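The base-code setup above can be illustrated with a minimal sketch. This is not the paper's generalized-polynomial construction; it only shows encoding with a small binary cyclic code $C_{0}[n,n-r]$ via its generator polynomial, using the standard [7,4] Hamming code generated by g(X) = 1 + X + X^3 as an illustrative example.

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists
    (lowest degree first); coefficients are reduced mod 2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def encode_cyclic(msg_bits, gen):
    """Non-systematic cyclic encoding: c(X) = m(X) * g(X) over GF(2)."""
    return poly_mul_gf2(msg_bits, gen)

g = [1, 1, 0, 1]                        # g(X) = 1 + X + X^3, generates the [7,4] Hamming code
codeword = encode_cyclic([1, 0, 1, 1], g)
print(codeword)                         # → [1, 1, 1, 1, 1, 1, 1]
```

Raising the degree of the generator (as the paper does via $p(X^{\frac{1}{3^{k}}})$) lengthens the code and changes the rate/error-correction trade-off; the sketch above only fixes the small base case.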
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
You published recently (Nature 374, 587; 1995) a report headed "Error re-opens 'scientific' whaling debate". The error in question, however, relates to commercial whaling, not to scientific whaling. Although Norway cites science as a basis for the way in which it sets its own quota, scientific whaling means something quite different, namely killing whales for research purposes. Any member of the International Whaling Commission (IWC) has the right to conduct a research catch under the International Convention for the Regulation of Whaling, 1946. The IWC has reviewed new research or scientific whaling programs for Japan and Norway since the IWC moratorium on commercial whaling began in 1986. In every case, the IWC advised Japan and Norway to reconsider the lethal aspects of their research programs. Last year, however, Norway started a commercial hunt in combination with its scientific catch, despite the IWC moratorium.
Abstract:
Even though the digital processing of documents is increasingly widespread in industry, printed documents are still largely in use. In order to process electronically the contents of printed documents, information must be extracted from digital images of documents. When dealing with complex documents, in which the contents of different regions and fields can be highly heterogeneous with respect to layout, printing quality and the utilization of fonts and typing standards, the reconstruction of the contents of documents from digital images can be a difficult problem. In the present article we present an efficient solution for this problem, in which the semantic contents of fields in a complex document are extracted from a digital image.
Abstract:
This work develops a computational approach to boundary and initial-value problems using operational matrices, in order to run an evolutive process in a Hilbert space. In addition, upper bounds for the errors in the solutions and in their derivatives can be estimated, providing accuracy measures.
Abstract:
The Neotropical evaniid genus Evaniscus Szepligeti currently includes six species. Two new species are described, Evaniscus lansdownei Mullins, sp. n. from Colombia and Brazil and E. rafaeli Kawada, sp. n. from Brazil. Evaniscus sulcigenis Roman, syn. n., is synonymized under E. rufithorax Enderlein. An identification key to species of Evaniscus is provided. Thirty-five parsimony informative morphological characters are analyzed for six ingroup and four outgroup taxa. A topology resulting in a monophyletic Evaniscus is presented with E. tibialis and E. rafaeli as sister to the remaining Evaniscus species. The Hymenoptera Anatomy Ontology and other relevant biomedical ontologies are employed to create semantic phenotype statements in Entity-Quality (EQ) format for species descriptions. This approach is an early effort to formalize species descriptions and to make descriptive data available to other domains.
Abstract:
Estimates of evapotranspiration on a local scale are important information for agricultural and hydrological practices. However, equations that estimate potential evapotranspiration from temperature data alone, though simple to use, are usually less trustworthy than the Food and Agriculture Organization (FAO) Penman-Monteith standard method. The present work describes two correction procedures that make potential evapotranspiration estimates from temperature more reliable. Initially, the standard FAO Penman-Monteith method was evaluated with a complete climatological data set for the period between 2002 and 2006. Then temperature-based estimates by the Camargo and Jensen-Haise methods were adjusted by error autocorrelation evaluated over biweekly and monthly periods. In a second adjustment, simple linear regression was applied. The adjusted equations were validated with climatic data available for the year 2001. Both proposed methodologies showed good agreement with the standard method, indicating that they can be used for local potential evapotranspiration estimates.
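The second adjustment described above, a simple linear regression mapping temperature-based estimates onto the FAO Penman-Monteith reference, can be sketched as follows. The data values below are invented for illustration only; the study's coefficients would come from its 2002-2006 calibration set.

```python
def fit_linear(x, y):
    """Ordinary least squares for y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical biweekly potential-ET estimates (mm/day):
# temperature-based method vs. the FAO Penman-Monteith reference.
et_temp = [3.1, 4.0, 4.8, 5.5, 6.2]
et_fao  = [3.5, 4.3, 5.3, 6.1, 6.8]

a, b = fit_linear(et_temp, et_fao)
adjusted = [a + b * v for v in et_temp]   # corrected estimates
```

By construction the OLS fit preserves the mean, so the adjusted series matches the reference on average, which is the point of the correction.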
Abstract:
Background: Early progressive nonfluent aphasia (PNFA) may be difficult to differentiate from semantic dementia (SD) in a nonspecialist setting. There are descriptions of the clinical and neuropsychological profiles of patients with PNFA and SD but few systematic comparisons. Method: We compared the performance of groups with SD (n = 27) and PNFA (n = 16) with comparable ages, education, disease duration, and severity of dementia as measured by the Clinical Dementia Rating Scale on a comprehensive neuropsychological battery. Principal components analysis and intergroup comparisons were used. Results: A 5-factor solution accounted for 78.4% of the total variance with good separation of neuropsychological variables. As expected, both groups were anomic with preserved visuospatial function and mental speed. Patients with SD had lower scores on comprehension-based semantic tests and better performance on verbal working memory and phonological processing tasks. The opposite pattern was found in the PNFA group. Conclusions: Neuropsychological tests that examine verbal and nonverbal semantic associations, verbal working memory, and phonological processing are the most helpful for distinguishing between PNFA and SD.
Abstract:
The scope of this study was to estimate calibrated values for dietary data obtained by the Food Frequency Questionnaire for Adolescents (FFQA) and to illustrate the effect of this approach on food consumption data. The adolescents were assessed on two occasions, with an average interval of twelve months. In 2004, 393 adolescents participated, and 289 of them were reassessed in 2005. Dietary data obtained by the FFQA were calibrated using regression coefficients estimated from the average of two 24-hour recalls (24HR) in the subsample. The calibrated values were similar to the 24HR reference measurement in the subsample. In 2004 and 2005 a significant difference was observed between the average consumption levels of the FFQA before and after calibration for all nutrients. With the use of calibrated data, the proportion of schoolchildren with fiber intake below the recommended level increased. Therefore, calibrated data can be used to obtain adjusted associations due to the reclassification of subjects within the predetermined categories.
Abstract:
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
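A toy version of the codeword test described above: checking membership in the [7,4] Hamming code with its parity-check matrix H, where a word is a codeword exactly when its syndrome H·c (mod 2) is zero. The binary words here are arbitrary placeholders; the nucleotide-to-bit mapping and code parameters actually used in the cited work may differ.

```python
# Parity-check matrix of the [7,4] Hamming code (columns are 1..7 in binary).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Return the 3-bit syndrome of a 7-bit word; all zeros means
    the word is a Hamming codeword."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

print(syndrome([1, 1, 1, 1, 1, 1, 1]))  # all-ones is a codeword → [0, 0, 0]
print(syndrome([1, 0, 1, 1, 1, 1, 1]))  # one flipped bit → nonzero syndrome
```

A nonzero syndrome also points at the flipped position, which is what makes the "is this sequence a codeword?" test cheap to run over long DNA stretches.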
Abstract:
STUDY DESIGN: Clinical measurement. OBJECTIVE: To translate and culturally adapt the Lower Extremity Functional Scale (LEFS) into a Brazilian Portuguese version, and to test the construct and content validity and reliability of this version in patients with knee injuries. BACKGROUND: There is no Brazilian Portuguese version of an instrument to assess the function of the lower extremity after orthopaedic injury. METHODS: The translation of the original English version of the LEFS into a Brazilian Portuguese version was accomplished using standard guidelines and tested in 31 patients with knee injuries. Subsequently, 87 patients with a variety of knee disorders completed the Brazilian Portuguese LEFS, the Medical Outcomes Study 36-Item Short-Form Health Survey, the Western Ontario and McMaster Universities Osteoarthritis Index, the International Knee Documentation Committee Subjective Knee Evaluation Form, and a visual analog scale for pain. All patients were retested within 2 days to determine the reliability of these measures. Validity was assessed by determining the level of association between the Brazilian Portuguese LEFS and the other outcome measures. Reliability was documented by calculating internal consistency, test-retest reliability, and standard error of measurement. RESULTS: The Brazilian Portuguese LEFS had a high level of association with the physical component of the Medical Outcomes Study 36-Item Short-Form Health Survey (r = 0.82), the Western Ontario and McMaster Universities Osteoarthritis Index (r = 0.87), the International Knee Documentation Committee Subjective Knee Evaluation Form (r = 0.82), and the pain visual analog scale (r = -0.60) (all, P<.05). The Brazilian Portuguese LEFS had a low level of association with the mental component of the Medical Outcomes Study 36-Item Short-Form Health Survey (r = 0.38, P<.05).
The internal consistency (Cronbach alpha = .952) and test-retest reliability (intraclass correlation coefficient = 0.957) of the Brazilian Portuguese version of the LEFS were high. The standard error of measurement was low (3.6) and the agreement was considered high, demonstrated by the small differences between test and retest and the narrow limits of agreement, as observed in Bland-Altman and survival-agreement plots. CONCLUSION: The translation of the LEFS into a Brazilian Portuguese version was successful in preserving the semantic and measurement properties of the original version and was shown to be valid and reliable in a Brazilian population with knee injuries. J Orthop Sports Phys Ther 2012;42(11):932-939, Epub 9 October 2012. doi:10.2519/jospt.2012.4101
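For reference, the internal-consistency figure reported above (Cronbach alpha) is computed from a respondents-by-items score matrix. A minimal sketch with invented scores, applying alpha = k/(k-1) * (1 - sum of item variances / variance of totals):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """`scores`: one row per respondent, one column per item."""
    k = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented 4-respondent, 4-item Likert scores, for illustration only.
scores = [
    [4, 3, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
]
print(round(cronbach_alpha(scores), 3))  # → 0.93
```

Values approaching 1, like the .952 reported for the LEFS translation, indicate that the items vary together rather than independently.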
Abstract:
With the increase in research on the components of body image, validated instruments are needed to evaluate its dimensions. The Body Change Inventory (BCI) assesses strategies used to alter body size among adolescents. The scope of this study was to describe the translation and evaluation for semantic equivalence of the BCI in the Portuguese language. The process involved the steps of (1) translation of the questionnaire into the Portuguese language; (2) back-translation into English; (3) evaluation of semantic equivalence; and (4) assessment of comprehension by professional experts and the target population. The six subscales of the instrument were translated into the Portuguese language. Language adaptations were made to render the instrument suitable for the Brazilian reality. The questions were interpreted as easily understandable by both experts and young people. The Body Change Inventory has thus been translated and adapted into Portuguese. Evaluation of the operational, measurement, and functional equivalence is still needed.