992 results for Non-binary arithmetic
Abstract:
We present here an information reconciliation method and demonstrate for the first time that it can achieve efficiencies close to 0.98. This method is based on the belief propagation decoding of non-binary LDPC codes over finite (Galois) fields. In particular, for convenience and faster decoding we only consider power-of-two Galois fields.
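To illustrate the kind of power-of-two Galois-field arithmetic such a decoder works over, here is a minimal sketch of multiplication in GF(2^4); the field size and the primitive polynomial x^4 + x + 1 are illustrative assumptions, not details taken from the abstract.

```python
# Minimal sketch of arithmetic in GF(2^m) for m = 4, the kind of
# power-of-two Galois field used by non-binary LDPC decoders.
# The primitive polynomial x^4 + x + 1 (0b10011) is an illustrative choice.

M = 4
PRIM_POLY = 0b10011  # x^4 + x + 1

def gf_add(a: int, b: int) -> int:
    """Addition in GF(2^m) is bitwise XOR (characteristic 2)."""
    return a ^ b

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication with reduction modulo PRIM_POLY."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):      # degree reached m: reduce by the primitive polynomial
            a ^= PRIM_POLY
    return result

if __name__ == "__main__":
    # (x^3 + 1) * (x + 1) = x^4 + x^3 + x + 1 = x^3 (mod x^4 + x + 1)
    print(bin(gf_mul(0b1001, 0b0011)))  # -> 0b1000
```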
(In)Visibili [(In)Visible]. Difficulties, choices, and implications in the Italian national mediation of non-binary characters
Abstract:
With the steadily growing number of non-binary characters in English-language television series, an interesting field of study emerges from the point of view of Italian national mediation, shaped by the clash between these characters' preference for neutral pronouns and expressions and the grammatical structure of Italian, which is instead based on an exclusive opposition between masculine and feminine. This work therefore aims to identify, through a selection of case studies, the difficulties that inevitably arise when the Italian edition of these series is produced, the strategies adopted to render Italian gender-neutral, and their implications for the non-binary characters. The method consists of comparing the original version with the Italian edition (both dubbing and subtitles) of the lines relating to the non-binary characters in question, followed by an analysis of the differences and similarities found. The three cases considered (Sex Education, One Day at a Time and Grey's Anatomy) ultimately make it possible to identify, as the primary risk, the invisibility of gender non-binarism to the eyes and ears of Italian viewers and, with it, the distortion of the intent and value of the original edition.
Abstract:
In this paper we continue Feferman’s unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996), which uses the concept of the unfolding U(S) of a schematic system S in order to describe the operations, predicates, and principles concerning them that are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main result is that the provably convergent functions on binary words of all three unfolding systems are precisely the functions computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in (Eberhard and Strahm, Bull Symb Log 18(3):474–475, 2012).
Abstract:
OBJECTIVE: To describe the prevalence of stunting in children under five years of age and to analyze associated factors. METHODS: A baseline study that examined 2,040 children under five, assessing possible associations between stunting (height-for-age index ≤ -2 Z-scores) and variables organized hierarchically in six blocks: socioeconomic, household, sanitation, maternal, biological, and access to health services. Multivariate analysis was performed using Poisson regression with the robust standard error option, yielding adjusted prevalence ratios with 95% confidence intervals and the corresponding significance values. RESULTS: Among the non-dichotomous variables, there was a positive association with roof type and number of residents per room, and a negative association with income, maternal schooling, and birth weight. The adjusted analysis also indicated the following significant variables: water supply, visits by the community health worker, place of delivery, hospitalization for diarrhea, and hospitalization for pneumonia. CONCLUSION: The factors identified as risks for stunting reflect the multicausal nature of the problem, implying the need for multisectoral and multilevel interventions for its control.
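As a rough sketch of the analysis described above (Poisson regression with robust standard errors yielding adjusted prevalence ratios), the following illustrates the idea with statsmodels; the variable names and the toy data are assumptions made for the example, not the study's data.

```python
# Sketch: adjusted prevalence ratios from Poisson regression with
# robust (heteroscedasticity-consistent) standard errors.
# Column names (stunted, income, ...) are illustrative, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for a survey of children under five.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "stunted": rng.integers(0, 2, 500),             # 1 = height-for-age <= -2 Z
    "income": rng.normal(2.0, 0.5, 500),            # household income
    "maternal_schooling": rng.integers(0, 12, 500), # years of schooling
    "residents_per_room": rng.normal(1.5, 0.4, 500),
})

# Poisson regression of a binary outcome with robust standard errors;
# exponentiated coefficients are interpreted as prevalence ratios.
model = smf.poisson(
    "stunted ~ income + maternal_schooling + residents_per_room", data=df
).fit(cov_type="HC1")

print(np.exp(model.params))      # adjusted prevalence ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```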
Abstract:
Finding large deletion-correcting codes is an important problem in coding theory, and many researchers have studied it over the years. Varshamov and Tenengolts constructed the Varshamov-Tenengolts codes (VT codes), and in 1992 Levenshtein showed that the Varshamov-Tenengolts codes are perfect binary one-deletion-correcting codes. Tenengolts constructed T codes to handle the non-binary cases. However, the T codes are neither optimal nor perfect, which means some progress can still be made. Later, Bours showed that perfect deletion-correcting codes have a close relationship with design theory. Using this approach, Wang and Yin constructed perfect 5-deletion-correcting codes of length 7 for large alphabet sizes. Our research focuses on how to extend or combinatorially construct large codes with longer length, few deletions, and a small but non-binary alphabet, especially a ternary one. After a brief study, we discovered some properties of T codes and produced some large codes in three different ways by extending existing good codes.
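For context, the binary Varshamov-Tenengolts codes mentioned above can be enumerated directly from their defining congruence; the brute-force sketch below only illustrates that definition and is not one of the constructions developed in the thesis.

```python
# Sketch: enumerate the binary Varshamov-Tenengolts code VT_a(n),
#   VT_a(n) = { x in {0,1}^n : sum_i i * x_i == a (mod n+1) },
# which corrects a single deletion.  Brute force, for small n only.
from itertools import product

def vt_code(n: int, a: int) -> list:
    """All length-n binary words whose VT checksum equals a mod (n+1)."""
    code = []
    for x in product((0, 1), repeat=n):
        checksum = sum(i * xi for i, xi in enumerate(x, start=1))
        if checksum % (n + 1) == a:
            code.append(x)
    return code

if __name__ == "__main__":
    # Sizes of VT_a(4) for each residue a; VT_0(n) is the largest.
    for a in range(5):
        print(a, len(vt_code(4, a)))
```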
Abstract:
Most commercial and financial data are stored in decimal form. Recently, support for decimal arithmetic has received increased attention due to its growing importance in financial analysis, banking, tax calculation, currency conversion, insurance, telephone billing, and accounting. Performing decimal arithmetic on systems that do not support decimal computations may give results with representation error, conversion error, and/or rounding error. In this world of precision, such errors are no longer tolerable. The errors can be eliminated and better accuracy achieved if decimal computations are done using Decimal Floating Point (DFP) units. But the floating-point arithmetic units in today's general-purpose microprocessors are based on the binary number system, and decimal computations are done using binary arithmetic. Only a few common decimal numbers can be exactly represented in Binary Floating Point (BFP). In many cases, the law requires that results generated from financial calculations performed on a computer exactly match manual calculations. Currently, many applications involving fractional decimal data perform decimal computations either in software or with a combination of software and hardware. Performance can be dramatically improved by complete hardware DFP units, and this leads to the design of processors that include DFP hardware. VLSI implementations using the same modular building blocks can decrease system design and manufacturing cost, and a multiplexer realization is a natural choice from the viewpoint of cost and speed. This thesis focuses on the design and synthesis of an efficient decimal MAC (Multiply Accumulate) architecture for high-speed decimal processors based on the IEEE Standard for Floating-Point Arithmetic (IEEE 754-2008). The research goal is to design and synthesize decimal MAC architectures that achieve higher performance. Efficient design methods and architectures are developed for a high-performance DFP MAC unit as part of this research.
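The representation error that motivates dedicated decimal hardware is easy to reproduce in software; the short sketch below contrasts binary floating point with software decimal arithmetic, illustrating the problem rather than the DFP MAC hardware itself.

```python
# Sketch: why binary floating point is problematic for financial data.
# 0.10 has no exact binary representation, so repeated addition drifts,
# while decimal arithmetic (in software here, or in a hardware DFP unit) stays exact.
from decimal import Decimal

binary_total = sum(0.10 for _ in range(100))               # binary floating point
decimal_total = sum(Decimal("0.10") for _ in range(100))   # decimal arithmetic

print(binary_total)         # e.g. 9.999999999999998 (representation error accumulates)
print(decimal_total)        # 10.00
print(binary_total == 10)   # False
print(decimal_total == 10)  # True
```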
Abstract:
It is problematic to use standard ontology tools when describing vague domains. Standard ontologies are designed to formally define one view of a domain, and although it is possible to define disagreeing statements, it is not advisable, as the resulting inferences could be incorrect. Two different solutions to this problem, in two different vague domains, have been developed and are presented. The first domain is the knowledge base of conversational agents (chatbots). An ontological scripting language has been designed to access ontology data from within chatbot code. The solution developed is based on reifications of user statements. It enables a new layer of logic based on the different views of the users, allowing the body of knowledge to grow automatically. The second domain is competencies and competency frameworks. An ontological framework has been developed to model different competencies using the emergent standards. It enables comparison of competencies using a mix of linguistic logics and description logics. The comparison results are non-binary rather than simple yes/no answers, highlighting the vague nature of the comparisons. The solution has been developed with small ontologies which can be added to and modified so that the competency user can build a total picture that fits the user's purpose. Finally, these two approaches are considered in light of how they could aid future work in vague domains; further work is described in both domains, and also in other domains such as the semantic web. This demonstrates two different approaches to achieving inferences using standard ontology tools in vague domains.
Abstract:
Graduate Program in Mathematics Education - IGCE
Abstract:
This paper analyzes concepts of independence and assumptions of convexity in the theory of sets of probability distributions. The starting point is Kyburg and Pittarelli's discussion of "convex Bayesianism" (in particular their proposals concerning E-admissibility, independence, and convexity). The paper offers an organized review of the literature on independence for sets of probability distributions; new results on graphoid properties and on the justification of "strong independence" (using exchangeability) are presented. Finally, the connection between Kyburg and Pittarelli's results and recent developments on the axiomatization of non-binary preferences, and its impact on "complete" independence, are described.
Abstract:
This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs in order to integrate the available experimental data and obtain in-silico models that can be used to study, formulate new hypotheses about, and plan experiments on aspects not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, or why hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to APs from adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes. This model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as Local Binary Pattern and Local Phase Quantization; the non-binary coding and the multi-threshold approach are also introduced here. Chapter II.2 shows that the non-binary coding and the multi-threshold approach improve the classification performance on images of cellular/sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of the classification of indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
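As background for Part II, the classic 3x3 Local Binary Pattern descriptor mentioned above can be sketched as follows; the non-binary coding and multi-threshold variants studied in the thesis generalize the single thresholding step shown here (this is a generic illustration, not the thesis implementation).

```python
# Sketch: classic 3x3 Local Binary Pattern (LBP) code for one pixel.
# Each of the 8 neighbours is thresholded against the centre pixel and
# the resulting bits are packed into an 8-bit code.  Non-binary /
# multi-threshold variants replace this single binary threshold.
import numpy as np

# Offsets of the 8 neighbours, clockwise from the top-left.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(image: np.ndarray, r: int, c: int) -> int:
    """8-bit LBP code of the pixel at (r, c); assumes a 1-pixel border exists."""
    centre = image[r, c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if image[r + dr, c + dc] >= centre:
            code |= 1 << bit
    return code

if __name__ == "__main__":
    img = np.array([[5, 9, 1],
                    [4, 6, 7],
                    [2, 3, 8]])
    print(lbp_code(img, 1, 1))  # LBP code of the centre pixel
```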
Abstract:
In 2011, there will be an estimated 1,596,670 new cancer cases and 571,950 cancer-related deaths in the US. With the ever-increasing applications of cancer genetics in epidemiology, there is great potential to identify genetic risk factors that would help identify individuals with increased genetic susceptibility to cancer, which could be used to develop interventions or targeted therapies that could reduce cancer risk and mortality. In this dissertation, I propose to develop a new statistical method to evaluate the role of haplotypes in cancer susceptibility and development. This model will be flexible enough to handle not only haplotypes of any size but also a variety of covariates. I will then apply this method to three cancer-related data sets (Hodgkin Disease, Glioma, and Lung Cancer). I hypothesize that the estimation of the association between haplotypes and disease improves substantially when haplotypes are inferred with a Bayesian mathematical method that uses prior information from known genetic sources. Analyses based on haplotypes using information from publicly available genetic sources generally show increased odds ratios and smaller p-values in the Hodgkin, Glioma, and Lung data sets. For instance, the Bayesian Joint Logistic Model (BJLM) inferred haplotype TC had a substantially higher estimated effect size (OR=12.16, 95% CI = 2.47-90.1 vs. 9.24, 95% CI = 1.81-47.2) and a more significant p-value (0.00044 vs. 0.008) for Hodgkin Disease compared to a traditional logistic regression approach. Also, the effect sizes of haplotypes modeled with recessive genetic effects were higher (and had more significant p-values) when analyzed with the BJLM. Full genetic models with haplotype information developed with the BJLM resulted in significantly higher discriminatory power and a significantly higher Net Reclassification Index compared to those developed with haplo.stats for lung cancer. Future work could incorporate the 1000 Genomes Project, which offers a larger selection of SNPs that could be incorporated into the information from known genetic sources. Other future analyses include testing non-binary outcomes, such as the levels of biomarkers present in lung cancer (NNK), and extending this analysis to full GWAS studies.
Abstract:
Background: Little research has been conducted to assess the effect of using memory training with school-aged children who were born very preterm. This study aimed to determine whether two types of memory training approaches resulted in an improvement of trained functions and/or a generalization of the training effect to non-trained cognitive domains. Methods: Sixty-eight children born very preterm (7-12 years) were randomly allocated to a group undertaking memory strategy training (n=23), working memory training (n=22), or a waiting control group (n=23). Neuropsychological assessment was performed before and immediately after the training or waiting period, and at a six-month follow-up. Results: In both training groups, significant improvement of different memory domains occurred immediately after training (near transfer). Improvement of non-trained arithmetic performance was observed after strategy training (far transfer). At a six-month follow-up assessment, children in both training groups demonstrated better working memory, and their parents rated their memory functions to be better than controls. Performance level before the training was negatively associated with the training gain. Conclusions: These results highlight the importance of cognitive interventions, in particular the teaching of memory strategies, in very preterm-born children at early school age to strengthen cognitive performance and prevent problems at school.
Abstract:
A half-adder and full-adder design using a new optical processing element is presented. The optical processing element is made using optical fiber, optical couplers, and a non-linear optical device. This element allows fourteen different pairs of logical functions of two inputs to be programmed on its two outputs. Two optical control signals using non-binary logic select which pair of logical functions is obtained at the outputs. By appropriate selection of the power levels of the optical control signals, we can configure a half-adder and, with a small modification, a full-adder. A ripple-carry adder design is also presented.
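For reference, the logic that the optical element is configured to realize is the standard half-adder / full-adder / ripple-carry structure; a minimal software sketch of that behaviour (the optics themselves are not modelled) is given below.

```python
# Sketch of the logic the optical element is configured to implement:
# half adder, full adder, and an n-bit ripple-carry adder built from them.

def half_adder(a: int, b: int):
    """Sum and carry of two bits."""
    return a ^ b, a & b

def full_adder(a: int, b: int, cin: int):
    """Sum and carry of two bits plus a carry-in (two chained half adders)."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first); returns sum bits plus carry-out."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

if __name__ == "__main__":
    # 6 + 3 = 9, with LSB-first bit lists
    print(ripple_carry_add([0, 1, 1, 0], [1, 1, 0, 0]))  # [1, 0, 0, 1, 0]
```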