279 results for Alphabet
Abstract:
The design of a large and reliable DNA codeword library is a key problem in DNA-based computing. DNA codes, namely sets of fixed-length edit-metric codewords over the alphabet {A, C, G, T}, satisfy certain combinatorial constraints arising from the biological and chemical restrictions of DNA strands. The primary constraints that we consider are the reverse-complement constraint and the fixed GC-content constraint, as well as the basic edit-distance constraint between codewords. We explore the theory underlying DNA codes and discuss several approaches to searching for optimal DNA codes. We use Conway's lexicode algorithm and an exhaustive search algorithm to produce provably optimal DNA codes for small parameter values, and we propose a genetic algorithm that searches for sub-optimal DNA codes with relatively large parameter values, whose sizes serve as reasonable lower bounds on the sizes of optimal DNA codes. Furthermore, we provide tables of bounds on the sizes of DNA codes with length from 1 to 9 and minimum distance from 1 to 9.
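The greedy lexicode construction mentioned above can be sketched directly: scan all words of length n in lexicographic order and keep each word that satisfies the three constraints against the code built so far. The sketch below is a minimal illustration, not the authors' implementation; it assumes the standard formulations of the constraints (fixed number of G/C symbols, minimum edit distance between distinct codewords, and minimum edit distance from every codeword to every reverse complement, including a word's own).

```python
from itertools import product

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def edit_distance(u, v):
    """Levenshtein (edit) distance by dynamic programming."""
    m, n = len(u), len(v)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if u[i - 1] == v[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def reverse_complement(w):
    return "".join(COMPLEMENT[c] for c in reversed(w))

def gc_content(w):
    return sum(c in "GC" for c in w)

def can_add(code, x, d_min, gc):
    """Can x join `code` under fixed GC-content, edit-distance,
    and reverse-complement constraints?"""
    if gc_content(x) != gc:
        return False
    if any(edit_distance(x, y) < d_min for y in code):
        return False
    # Reverse-complement constraint, including x against its own RC.
    return all(edit_distance(x, reverse_complement(y)) >= d_min
               for y in code + [x])

def build_lexicode(n, d_min, gc):
    """Greedy lexicode: accept words in lexicographic order."""
    code = []
    for tup in product("ACGT", repeat=n):
        w = "".join(tup)
        if can_add(code, w, d_min, gc):
            code.append(w)
    return code
```

For small parameters (say n = 4, d_min = 2, gc = 2) the scan over all 4^n words finishes instantly; the greedy code it returns is a lower bound on the optimal size, which is the role such constructions play above.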
Abstract:
Research points clearly to the need for all concerned stakeholders to adopt a preventative approach when intervening with children who are at risk for future reading disabilities. Research has also indicated that one particular sub-group of children at risk for reading impairments is preschool children with language impairments (Catts, 1993). Preschool children with language impairments may have difficulties with emergent literacy skills, important prerequisite skills necessary for successful formal reading. Only in the past decade have researchers begun to study the effects of emergent literacy intervention on preschool children with language impairments. The current study continues this investigation of how to effectively implement an emergent literacy therapy aimed at supporting preschool children with language impairments. In addition, the current study explores emergent literacy intervention within an applied clinical setting. The setting presents a host of methodological and theoretical challenges, challenges that will advance our understanding of children within naturalistic settings. This exploratory study included thirty-eight participants who were recruited from Speech Services Niagara, a local preschool speech and language program. Using a between-group pre- and posttest design, this study compared two intervention approaches: an experimental emergent literacy intervention and a traditional language intervention. The experimental intervention was adapted from Read It Again! (Justice, McGinty, Beckman, & Kilday, 2006), and the traditional language intervention was based on the traditional models of language therapy typically used in preschool speech and language programs across Ontario. Results indicated that the emergent literacy intervention was superior to the traditional language therapy in improving the children's alphabet knowledge, print and word awareness, and phonological awareness.
Moreover, results revealed that children with more severe language impairments require greater support and more explicit instruction than children with moderate language impairments. Another important finding indicated that the effects of the preschool emergent literacy intervention used in this study may not be sustainable as children enter grade one. The implications of this study point to the need to support preschool children with language impairments with intensive emergent literacy intervention that extends beyond preschool into formal educational settings.
Abstract:
Finding large deletion-correcting codes is an important problem in coding theory, and many researchers have studied it over the years. Varshamov and Tenengolts constructed the Varshamov-Tenengolts codes (VT codes), and Levenshtein showed that the Varshamov-Tenengolts codes are perfect binary one-deletion-correcting codes in 1992. Tenengolts constructed T codes to handle the non-binary cases. However, the T codes are neither optimal nor perfect, which means some progress can still be made. Later, Bours showed that perfect deletion-correcting codes have a close relationship with design theory. By this approach, Wang and Yin constructed perfect 5-deletion-correcting codes of length 7 for large alphabet sizes. Our research focuses on how to extend or combinatorially construct large codes with longer length, few deletions, and a small but non-binary alphabet, especially a ternary one. After a brief study, we discovered some properties of T codes and produced some large codes in three different ways by extending existing good codes.
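The VT construction referred to above is compact enough to state directly: VT_a(n) is the set of binary words x_1…x_n with Σ i·x_i ≡ a (mod n+1). A minimal sketch (for illustration only) builds such a code and brute-force verifies the single-deletion property, i.e. that no two codewords can produce the same word after one deletion:

```python
from itertools import product

def vt_code(n, a=0):
    """VT_a(n): binary words x of length n with sum(i * x_i) = a mod n+1,
    positions indexed from 1."""
    return ["".join(map(str, x))
            for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a]

def deletions(w):
    """All words obtainable from w by deleting exactly one symbol."""
    return {w[:i] + w[i + 1:] for i in range(len(w))}

def corrects_one_deletion(code):
    """True iff the one-deletion balls of distinct codewords are disjoint,
    so any single deletion can be uniquely corrected."""
    seen = {}
    for c in code:
        for r in deletions(c):
            if r in seen and seen[r] != c:
                return False
            seen[r] = c
    return True
```

Running `corrects_one_deletion(vt_code(6))` confirms the property for a small length; the size of VT_0(n) is at least 2^n / (n+1), which is what makes these codes asymptotically optimal in the binary case.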
Abstract:
The purpose of this project was to provide parents with an awareness of the role that they play in their preschool children's literacy and reading development, and to create a practical handbook that parents can use to teach early literacy and reading skills to their preschool children in the home environment. The handbook was created in response to literature confirming that children benefit from developing emergent literacy skills before they enter school in kindergarten or grade 1. In addition to the information gathered from the academic literature, needs assessments were conducted in order to hear the perspectives of the multiple stakeholders involved in the context of this project. The needs assessment questionnaires were completed by 4 Ontario-certified grade 1 and 2 teachers and 4 parents with preschool children or children in kindergarten or grade 1. Data collected from these participants highlighted the needs of parents and were used to create a comprehensive handbook that will hopefully be accessible and useful to a wide parent audience. The results of the research project indicated that parents would, in fact, benefit from having access to a resource such as this handbook to assist in teaching the 4 components of emergent literacy (oral language, alphabet knowledge, phonological awareness, and print awareness) to their preschool children.
Abstract:
Between 1700 and 1850, per-capita income doubled in Europe while falling in the rest of Eurasia. Neither geography nor economic institutions can explain this sudden divergence. Here the consequences of differences in communications technology are examined. For the first time, there appeared in Europe a combination of a standardized medium (national vernaculars with a phonetic alphabet) and a non-standardized message (competing religious, political and scientific ideas). The result was an unprecedented fall in the cost of combining ideas and a burst of productivity-raising innovation. Elsewhere, decreasing standardization of the medium and increasing standardization of the message blocked innovation.
Abstract:
Affiliation: Département de Biochimie, Université de Montréal
Abstract:
Writing in an "adopted" language is an increasingly common literary phenomenon. To date, its contextualization has revolved mainly around identity and exile, neglecting a less biographical approach, one more attentive to what might be called a poetics of bilingualism, in the lineage of Walter Benjamin's philosophy of language. The concept of a pure language, resonating in the silence of each language like an anterior presence, can give access to this opening of words and help realize their simultaneous power of unveiling and evasion, like an invitation to listen attentively to what is said through them. The work of Silvia Baron Supervielle, a Francophone writer and translator of Argentine origin, testifies to the exteriority inherent in languages. The analysis in this thesis attempts, suggestively and through an arrangement of complementary philosophical concepts, to make this singular voice palpable in three publications of different genres: philosophical reflections on languages (l'alphabet du feu), the journal of a reader and poet (Le pays de l'écriture), and a prose poem (La frontière). The aim is less to formulate a theory of the in-between of languages than to show the opening of the word generated by writing from one language to another.
Abstract:
Parents play an important role in the development of young children's reading competence, and reading to one's child is a family literacy practice strongly encouraged by society. This study aims to describe this parental support, in particular the comprehension strategies used between a parent and child during read-alouds. We observed 10 parents reading an alphabet book, a narrative text with a plot, a narrative text without a plot, and an informational text to their five-year-old children. The strategies used by parents and their children turn out to differ according to text genre. Children with weak results (letter and letter-sound recognition, text recall, comprehension of receptive vocabulary and morphosyntax) also use fewer comprehension strategies during read-alouds than children with better results. We also examined the scaffolding offered by parents of children with strong and weak reading skills. These two groups of parents differ in the quality and frequency of their use of comprehension strategies. Indeed, we observe that parents who guide their children in the use of comprehension strategies are more often associated with children demonstrating strong reading competence. Finally, we also examined family literacy practices (exposure time and access to reading, modelling by family members, parents' attitude toward reading, and the implementation of activities fostering the child's phonological awareness). Only the implementation of activities fostering phonological awareness could be linked to children's performance.
Abstract:
Thesis under joint supervision (cotutelle) between Université de Montréal and Université Paris Diderot-Paris 7
Abstract:
This thesis comprises five chapters, including the introductory chapter, which gives a brief introduction and basic definitions of fuzzy set theory and its applications, semigroup actions on sets, finite semigroup theory, and its application in automata theory, along with the references used in this thesis. In the second chapter we define an S-fuzzy subset of X by extending the notion of a semigroup action of S on X to a semigroup action of S on a fuzzy subset of X using Zadeh's maximal extension principle, and prove some results based on this. We also define an S-fuzzy morphism between two S-fuzzy subsets of X; together these form a category S FSETX. Some general properties and special objects in this category are studied, and finally it is proved that S SET and S FSET are categorically equivalent. Further, we generalize this concept to the action of a fuzzy semigroup on fuzzy subsets. As an application of this idea, we convert a finite state automaton into a finite fuzzy state automaton. A classical automaton determines whether a word is accepted by the automaton, whereas a finite fuzzy state automaton determines the degree of acceptance of the word. In the third chapter we define regular and inverse fuzzy automata and their construction, and prove that the corresponding transition monoids are regular and inverse monoids respectively. The language accepted by an inverse fuzzy automaton is an inverse fuzzy language, and we give a characterization of inverse fuzzy languages. We study some of their algebraic properties and prove that the collection IFL over an alphabet does not form a variety, since it is not closed under inverse homomorphic images. We also prove some results based on the fact that a semigroup is inverse if and only if idempotents commute and every L-class or R-class contains a unique idempotent.
The fourth chapter studies the structure of the automorphism group of a deterministic faithful inverse fuzzy automaton and proves that it is equal to a subgroup of the inverse monoid of all one-one partial fuzzy transformations on the state set. In the fifth chapter we define min-weighted and max-weighted power automata, study some of their algebraic properties, and prove that a fuzzy automaton and the fuzzy power automata associated with it have the same transition monoids. The thesis ends with a conclusion of the work done and the scope of further study.
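The contrast drawn above between classical acceptance and degree of acceptance can be made concrete with a standard max-min fuzzy automaton: the degree of a word is the maximum over all runs of the minimum transition membership along the run. The sketch below is a generic illustration of that formulation, not the thesis's construction; the state names and membership values are made up for the example.

```python
def degree_of_acceptance(word, init, trans, final):
    """Max-min degree of acceptance of `word`.
    init:  dict state -> initial membership in [0, 1]
    trans: dict (state, symbol) -> dict state -> membership
    final: dict state -> final membership"""
    current = dict(init)
    for a in word:
        nxt = {}
        for q, dq in current.items():
            for r, m in trans.get((q, a), {}).items():
                # Extend each run: min along the path, max over paths.
                nxt[r] = max(nxt.get(r, 0.0), min(dq, m))
        current = nxt
    return max((min(d, final.get(q, 0.0)) for q, d in current.items()),
               default=0.0)

# Hypothetical two-state example over the alphabet {a, b}.
init = {"q0": 1.0}
trans = {("q0", "a"): {"q0": 0.9, "q1": 0.5},
         ("q1", "b"): {"q1": 0.7}}
final = {"q1": 1.0}
```

Here the word "ab" is accepted to degree 0.5 (the weakest link on its only accepting run), while "b" has no run at all and so gets degree 0.0, which is exactly the classical rejection case.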
Abstract:
The restarting automaton is a restricted model of computation that was introduced by Jancar et al. to model the so-called analysis by reduction, a technique used in linguistics to analyze sentences of natural languages. The most general models of restarting automata make use of auxiliary symbols in their rewrite operations, although this ability does not directly correspond to any aspect of the analysis by reduction. Here we put restrictions on the way in which restarting automata use auxiliary symbols, and we investigate the influence of these restrictions on their expressive power. In fact, we consider two types of restrictions. First, we consider the number of auxiliary symbols in the tape alphabet of a restarting automaton as a measure of its descriptional complexity. Second, we consider the number of occurrences of auxiliary symbols on the tape as a dynamic complexity measure. We establish some lower and upper bounds with respect to these complexity measures concerning the ability of restarting automata to recognize the (deterministic) context-free languages and some of their subclasses.
Abstract:
Restarting automata can be seen as analytical variants of classical automata as well as of regulated rewriting systems. We study a measure for the degree of nondeterminism of (context-free) languages in terms of deterministic restarting automata that are (strongly) lexicalized. This measure is based on the number of auxiliary symbols (categories) used for recognizing a language as the projection of its characteristic language onto its input alphabet. This type of recognition is typical for analysis by reduction, a method used in linguistics for the creation and verification of formal descriptions of natural languages. Our main results establish a hierarchy of classes of context-free languages and two hierarchies of classes of non-context-free languages that are based on the expansion factor of a language.
Abstract:
Sign language (LG) is the natural language of deaf people, used as the form of expression and communication of the deaf community of a given country. However, it is altogether impossible to write these languages with an ordinary alphabet such as that of Portuguese (LP). In 1974, in Denmark, Valerie Sutton created SignWriting (SW), a writing system for sign languages, thus contradicting the idea that visual-spatial languages could not have a graphic representation. Fundamental to the emergence of this system were the pioneering studies of William Stokoe, which recognized the linguistic status of sign languages by attributing to them properties inherent to a language, such as arbitrariness and conventionality. In this work we present SW, a writing system for sign languages already used in other countries, and ask whether its adaptation to Portuguese Sign Language (LGP) is feasible and fruitful. To that end, we carry out the writing of LGP on the basis of distinct vocabulary areas present in the LGP teaching curriculum. Finally, we realize this proposal through a model of a training course in SW.
Abstract:
The author addresses the question of cultural relations with Vietnam (also called Cochinchina) from the sixteenth century onward, and in particular the introduction of the Latin alphabet by several members of the Society of Jesus, Portuguese among them, in a phase that preceded the colonial-type relations that would later develop, particularly with France. He further underlines the importance of preserving the memory of this West-East dialogue, notwithstanding the current post-imperial and post-colonial situation.