851 results for Coding Scheme
Abstract:
Our objective was to clone, express and characterize the adult Dermatophagoides farinae group 1 (Der f 1) allergen in order to produce recombinant allergens for future clinical applications and thereby eliminate the side reactions caused by crude mite extracts. Based on GenBank data, we designed primers and amplified the cDNA fragment coding for Der f 1 by nested PCR. After purification and recovery, the cDNA fragment was cloned into the pMD19-T vector. The fragment was then sequenced, subcloned into the plasmid pET28a(+), expressed in Escherichia coli BL21 and identified by Western blotting. Sequence analysis showed the presence of an open reading frame of 966 bp that encodes a protein of 321 amino acids. Interestingly, homology analysis showed that Der p 1 shared more than 87% amino acid identity with Eur m 1 but only 80% with Der f 1. Furthermore, phylogenetic analyses suggested that D. pteronyssinus is evolutionarily closer to Euroglyphus maynei than to D. farinae, even though D. pteronyssinus and D. farinae belong to the same genus, Dermatophagoides. Three cysteine peptidase active sites were found in the predicted amino acid sequence: 127-138 (QGGCGSCWAFSG), 267-277 (NYHAVNIVGYG) and 284-303 (YWIVRNSWDTTWGDSGYGYF). Moreover, secondary structure analysis revealed that Der f 1 contains α-helix (33.96%), extended strand (17.13%), β-turn (5.61%) and random coil (43.30%). A simple three-dimensional model of this protein was constructed using the SWISS-MODEL server. In summary, the cDNA coding for Der f 1 was cloned, sequenced and expressed successfully, and alignment and phylogenetic analyses suggest that D. pteronyssinus is evolutionarily closer to E. maynei than to D. farinae.
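The figures quoted above are internally consistent, and a reader can verify them in a few lines. The following minimal Python sketch checks only what the abstract states (the full Der f 1 sequence is not reproduced here): that a 966 bp open reading frame encodes 321 residues once the stop codon is discounted, and that each active-site motif matches the length of its stated 1-based residue span.

```python
# Consistency checks for the Der f 1 figures quoted in the abstract.
orf_bp = 966
codons = orf_bp // 3            # 322 codons, including the stop codon
protein_len = codons - 1        # the stop codon encodes no residue -> 321 aa
assert protein_len == 321

# Cysteine peptidase active-site motifs with their 1-based residue spans.
motifs = {
    (127, 138): "QGGCGSCWAFSG",
    (267, 277): "NYHAVNIVGYG",
    (284, 303): "YWIVRNSWDTTWGDSGYGYF",
}
for (start, end), seq in motifs.items():
    # An inclusive span start..end covers end - start + 1 residues.
    assert end - start + 1 == len(seq), (start, end, seq)

print("ORF length and motif spans are consistent:", protein_len, "aa")
```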
Abstract:
The quantitative component of this study examined the effect of computer-assisted instruction (CAI) on science problem-solving performance, as well as the significance of logical reasoning ability to this relationship. I had the dual role of researcher and teacher, as I conducted the study with 84 grade seven students to whom I simultaneously taught science on a rotary basis. A two-treatment research design using this sample of convenience allowed for a comparison between the problem-solving performance of a CAI treatment group (n = 46) and a laboratory-based control group (n = 38). Science problem-solving performance was measured by a pretest and posttest that I developed for this study. The validity of these tests was addressed through critical discussions with faculty members and colleagues, as well as through feedback gained in a pilot study. Reliability between the pretest and the posttest was high: students who tended to score high on the pretest also tended to score high on the posttest. Interrater reliability was found to be high for 30 randomly selected test responses which were scored independently by two raters (i.e., myself and my faculty advisor). Results indicated that the form of computer-assisted instruction (CAI) used in this study did not significantly improve students' problem-solving performance. Logical reasoning ability was measured by an abbreviated version of the Group Assessment of Logical Thinking (GALT). Logical reasoning ability was found to be correlated with problem-solving performance in that students with high logical reasoning ability tended to do better on the problem-solving tests and vice versa. However, no significant difference was observed in problem-solving improvement between the laboratory-based instruction group and the CAI group for students varying in level of logical reasoning ability. Nonsignificant trends were noted in results obtained from students of high logical reasoning ability, but these require further study. It was acknowledged that conclusions drawn from the quantitative component of this study were limited, as further modifications of the tests were recommended, as well as the use of a larger sample size. The purpose of the qualitative component of the study was to provide a detailed description of my thesis research process as a Brock University Master of Education student. My research journal notes served as the database for open coding analysis. This analysis revealed six main themes which best described my research experience: research interests, practical considerations, research design, research analysis, development of the problem-solving tests, and scoring scheme development. These important areas of my thesis research experience were recounted in the form of a personal narrative. It was noted that the research process was a form of problem solving in itself, as I made use of several problem-solving strategies to achieve desired thesis outcomes.
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal, 2) Miller's pursuit of the magic number seven, plus or minus two, 3) Ferguson's examination of transfer and abilities and, 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses. 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt, Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials and, where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that their learning would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups, one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment: those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. There are environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters which influence learning, and these have serious implications for educational programs.
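For readers unfamiliar with the factorial designs described above, the following minimal Python sketch shows how a between-subjects analysis of variance of this kind can be run today with statsmodels. The factor names and the randomly generated scores are illustrative placeholders, not the thesis data, and the repeated-trials factor is omitted for simplicity.

```python
# Illustrative factorial ANOVA (not the thesis data): verbal ability x
# previous practice x STM load, with randomly generated output scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
design = [(v, p, d)
          for v in ("high", "low")          # verbal ability
          for p in ("practice", "none")     # previous practice
          for d in (5, 7, 9)                # digits in the code (STM load)
          for _ in range(5)]                # five subjects per cell (60 total)
df = pd.DataFrame(design, columns=["verbal", "practice", "load"])
# Placeholder scores: practice helps, heavier STM load hurts.
df["output"] = (50 + 5 * (df["practice"] == "practice")
                - 2 * df["load"] + rng.normal(0, 3, len(df)))

model = smf.ols("output ~ C(verbal) * C(practice) * C(load)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests for main effects/interactions
```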
Abstract:
The present set of experiments was designed to investigate the organization and refinement of young children's face space. Past research has demonstrated that adults encode individual faces in reference to a distinct face prototype that represents the average of all faces ever encountered. The prototype is not a static abstracted norm but rather a malleable face average that is continuously updated by experience (Valentine, 1991); for example, following prolonged viewing of faces with compressed features (a technique referred to as adaptation), adults rate similarly distorted faces as more normal and more attractive (simple attractiveness aftereffects). Recent studies have shown that adults possess category-specific face prototypes (e.g., based on race, sex). After viewing faces from two categories (e.g., Caucasian/Chinese) that are distorted in opposite directions, adults' attractiveness ratings simultaneously shift in opposite directions (opposing aftereffects). The current series of studies used a child-friendly method to examine whether, like adults, 5- and 8-year-old children show evidence for category-contingent opposing aftereffects. Participants were shown a computerized storybook in which Caucasian and Chinese children's faces were distorted in opposite directions (expanded and compressed). Both before and after adaptation (i.e., reading the storybook), participants judged the normality/attractiveness of a small number of expanded, compressed, and undistorted Caucasian and Chinese faces. The method was first validated by testing adults (Experiment 1) and was then refined in order to test 8-year-old (Experiment 2) and 5-year-old (Experiment 4a) children. Five-year-olds (our youngest age group) were also tested in a simple aftereffects paradigm (Experiment 3) and with male and female faces distorted in opposite directions (Experiment 4b). The current research is the first to demonstrate evidence for simple attractiveness aftereffects in children as young as 5, thereby indicating that, similar to adults, 5-year-olds utilize norm-based coding. Furthermore, this research provides evidence for race-contingent opposing aftereffects in both 5- and 8-year-olds; however, the opposing aftereffects demonstrated by 5-year-olds were driven largely by simple aftereffects for Caucasian faces. The lack of simple aftereffects for Chinese faces in 5-year-olds may reflect young children's limited experience with other-race faces and suggests that children's face space undergoes a period of increasing differentiation over time with respect to race. Lastly, we found no evidence for sex-contingent opposing aftereffects in 5-year-olds, which suggests that young children do not rely on a fully adult-like face space even for highly salient face categories (i.e., male/female) with which they have comparable levels of experience.
Abstract:
Thesis (Master of Science in Electrical Engineering), UANL, 2014.
Abstract:
This thesis surveys the advantages and disadvantages of using the dynamic functional programming language Scheme for video game development. The method used is first based on a more theoretical approach: a study of the programming needs expressed by this type of development, together with a detailed description of the Scheme language features relevant to video game development, is given to put the subject in context. A practical approach is then taken by developing two video games of increasing complexity: Space Invaders and Lode Runner. The development of these games led to extending the Scheme language with several domain-specific languages and libraries, notably an object-oriented programming system and a coroutine system. The experience gained through developing these games is finally compared with that of other video game developers in industry who have used Scheme to create commercial titles. In summary, using this language made it possible to reach a high level of abstraction that favours the modularity of the games developed without affecting their performance.
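The coroutine system mentioned above reflects a common pattern in game programming: each entity's behaviour is written as sequential code that yields control back to the game loop once per frame. The thesis implements this in Scheme; as a language-neutral illustration only, here is a minimal sketch of the same idea using Python generators (the entity behaviours are invented for the example).

```python
# Minimal coroutine-style game loop: each entity is a generator that
# runs a little sequential logic and then yields until the next frame.
def invader(name, start_x):
    x, dx = start_x, 1
    while True:
        x += dx
        if x in (0, 10):      # bounce at the edges of a 10-unit play field
            dx = -dx
        print(f"{name} at x={x}")
        yield                 # hand control back to the game loop

def game_loop(entities, frames):
    for frame in range(frames):
        print(f"-- frame {frame} --")
        for entity in entities:
            next(entity)      # resume each coroutine for one step

game_loop([invader("invader-1", 3), invader("invader-2", 7)], frames=3)
```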
Abstract:
In order to optimize the in-memory representation of Scheme records in the Gambit compiler, we introduced a system of type annotations and vectors containing an abbreviated representation of records. The latter omit the reference to the type descriptor and the header normally present on every record, and instead use a typing tree covering the whole of memory to recover the vector containing a reference. These new features are implemented through changes to the Gambit runtime. We introduce new primitives to the language and modify the existing architecture to correctly handle the new data types. The garbage collector must be modified to account for records containing heterogeneous values with irregular alignments, and for the existence of references contained in other objects. Management of the typing tree must also be done automatically. We then run a series of performance tests to determine whether gains are possible with these new primitives. We observe a major performance improvement in allocation and in garbage collector behaviour for large typed records and for vectors of records, typed or not. Slight overheads are incurred, however, on field accesses and, in the case of vectors of records, on access to the type descriptor.
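The central idea (recovering an object's type from its address rather than from a per-object header) can be illustrated independently of Gambit's actual runtime. The sketch below is a simplification with invented names: it keeps a sorted table mapping the start addresses of memory regions to their type descriptors and answers lookups with a binary search. Gambit's typing tree is of course implemented very differently inside the runtime.

```python
# Simplified illustration: find an object's type descriptor from its
# address using a sorted table of memory regions, instead of storing a
# header on every object. Names and structure are invented for clarity.
import bisect

class TypingTable:
    def __init__(self):
        self.starts = []   # sorted region start addresses
        self.regions = []  # (end_address, type_descriptor) per region

    def register(self, start, end, descriptor):
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.regions.insert(i, (end, descriptor))

    def descriptor_for(self, address):
        # Rightmost region starting at or before the address.
        i = bisect.bisect_right(self.starts, address) - 1
        if i >= 0:
            end, descriptor = self.regions[i]
            if address < end:
                return descriptor
        raise KeyError(f"address {address:#x} not in any typed region")

table = TypingTable()
table.register(0x1000, 0x2000, "point-record")
table.register(0x2000, 0x3000, "f64-vector")
print(table.descriptor_for(0x1a08))  # -> point-record
```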
Abstract:
In recent years, protection of information in digital form has become increasingly important. Image and video encryption has applications in various fields including Internet communications, multimedia systems, medical imaging, telemedicine and military communications. During storage as well as in transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels. Various encryption schemes are available today to deal with information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of “attack on confidentiality”. Secure transmission of information through insecure communication channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in successive transmission of information in real time. In order to minimize the latency, efficient encryption algorithms are needed. An encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric key block ciphers like the Data Encryption Standard (DES), the Advanced Encryption Standard (AES) and the Escrowed Encryption Standard (EES) are not efficient when the data size is large. With the availability of fast computing tools and communication networks at relatively lower costs today, these encryption standards appear to be not as fast as one would like. High throughput encryption and decryption are becoming increasingly important in the area of high-speed networking, and fast encryption algorithms are needed for high-speed secure communication of multimedia data. It has been shown that public key algorithms are not a substitute for symmetric-key algorithms: public key algorithms are slow, whereas symmetric key algorithms generally run much faster, and public key systems are vulnerable to chosen-plaintext attack. In this research work, a fast symmetric key encryption scheme, entitled “Matrix Array Symmetric Key (MASK) encryption” and based on matrix and array manipulations, has been conceived and developed. Fast conversion has been achieved with the use of matrix table look-up substitution, array-based transposition and circular shift operations performed in the algorithm. MASK encryption is a new concept in symmetric key cryptography. It employs a matrix and array manipulation technique using secret information and data values. It is a block cipher operating on plaintext message (or image) blocks of 128 bits using a secret key of size 128 bits, producing ciphertext message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler, and consequently much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES.
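The abstract names three building blocks of MASK: table look-up substitution, array-based transposition, and circular shifts. The exact MASK algorithm is not given in the abstract, so the sketch below is only a toy round built from those same three operations on a 16-byte (128-bit) block; it illustrates the flavour of the construction and must not be mistaken for the real cipher or used for actual security.

```python
# Toy illustration of MASK's building blocks (NOT the real MASK cipher
# and NOT secure): byte substitution via a key-derived table, an array
# transposition, and a circular shift, applied to a 16-byte block.
import hashlib

def toy_round(block: bytes, key: bytes) -> bytes:
    assert len(block) == 16 and len(key) == 16
    # 1) Substitution: a permutation of 0..255 derived from the key.
    seed = hashlib.sha256(key).digest()
    sbox = sorted(range(256),
                  key=lambda b: hashlib.sha256(seed + bytes([b])).digest())
    substituted = [sbox[b] for b in block]
    # 2) Transposition: read the 4x4 byte array column by column.
    transposed = [substituted[r * 4 + c] for c in range(4) for r in range(4)]
    # 3) Circular shift: rotate the block left by a key-dependent amount.
    k = key[0] % 16
    rotated = transposed[k:] + transposed[:k]
    return bytes(rotated)

print(toy_round(b"0123456789abcdef", b"secretkey0123456").hex())
```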
Abstract:
This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings. A simple linear interpolation technique is developed for this purpose. By using this method the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zero-crossing rate and short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of these two methods the first is simple and requires only very few computations compared to the other. This method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is considered as 'side information'; this enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which switches between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments respectively is presented. Simulation results are provided to show the improved performance of the coder.
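The first voiced/unvoiced classifier mentioned above relies on two standard short-time measures. As a minimal sketch (the actual distance measure and thresholds used in the thesis are not given in the abstract), the following Python computes the short-time zero-crossing rate and energy of a frame; voiced speech typically shows high energy and a low zero-crossing rate, and unvoiced speech the opposite.

```python
# Short-time zero-crossing rate and energy of one speech frame:
# the two features behind the simple voiced/unvoiced distance measure.
import numpy as np

def short_time_features(frame: np.ndarray) -> tuple[float, float]:
    signs = np.sign(frame)
    signs[signs == 0] = 1                      # treat exact zeros as positive
    zcr = np.mean(np.abs(np.diff(signs)) / 2)  # fraction of sign changes
    energy = np.mean(frame ** 2)               # average power of the frame
    return zcr, energy

fs = 8000
t = np.arange(240) / fs                        # one 30 ms frame at 8 kHz
voiced = np.sin(2 * np.pi * 120 * t)           # 120 Hz tone ~ voiced speech
unvoiced = np.random.default_rng(0).normal(0, 0.1, t.size)  # noise ~ fricative
print("voiced  :", short_time_features(voiced))
print("unvoiced:", short_time_features(unvoiced))
```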
Abstract:
Health insurance has become a necessity for the common man, next to food, clothing and shelter. Financing health expenses, whether catastrophic or arising from frequently contracted illnesses, is a major cause of mental agony for the common man. The cost of care may sometimes result in the complete erosion of family savings or may even lead to indebtedness, as many studies on the causes of rural indebtedness bear testimony (Jayalakshmi, 2006). A suitable cover by way of health insurance is all that is required to cope with such situations. Health care insurance rightly provides the mechanism for both individuals and families to mitigate the financial burden of medical expenses in the present context. Hence a well designed, affordable health insurance policy is the need of the hour. It is therefore very significant to study the extent to which beneficiaries in Kerala make use of the benefits provided by a social health insurance scheme like RSBY-CHIS. Based on the above pertinent points, this study assumes national relevance even though the geographical area of the study is limited to two districts of Kerala. The findings of the study will bring forth valuable inputs on the services availed by the beneficiaries of RSBY-CHIS and help take appropriate measures to improve the effectiveness of the scheme, whereby maximum quality benefit could be availed by the poorest of the poor and the scheme developed as a real dawn of a new era of health for them.
Abstract:
In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low power CMOS, quantum computing and nanotechnology. Low power circuits implemented using reversible logic that provide single error correction – double error detection (SEC-DED) are proposed in this paper. The design is done using a new 4 x 4 reversible gate called 'HCG' for implementing Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG) that preserves the input parity at the output bits is used to achieve fault tolerance for the Hamming error coding and detection circuits.
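SEC-DED coding itself is standard: a Hamming code corrects any single-bit error, and one extra overall-parity bit distinguishes single from double errors. The reversible-gate implementation is the paper's contribution and cannot be reproduced from the abstract, but the underlying Hamming(7,4) plus overall parity scheme can be sketched conventionally as follows.

```python
# Conventional Hamming(7,4) + overall parity (SEC-DED), shown only to
# illustrate the coding scheme the reversible circuits implement.
def encode(d):                       # d: four data bits [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    code = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7
    return code + [sum(code) % 2]                 # overall parity bit

def decode(c):                       # returns (data, status)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3
    overall = sum(c) % 2             # includes the extra parity bit
    if syndrome and not overall:     # parity consistent but syndrome set
        return None, "double error detected"
    if syndrome:                     # single error: flip the flagged bit
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]], "ok"

word = encode([1, 0, 1, 1])
word[4] ^= 1                         # inject a single-bit error
print(decode(word))                  # -> ([1, 0, 1, 1], 'ok')
```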
Abstract:
Clustering schemes improve the energy efficiency of wireless sensor networks. The inclusion of mobility as a new criterion for cluster creation and maintenance adds new challenges for these clustering schemes. In most algorithms, cluster formation and cluster head selection are done on a stochastic basis. In this paper we introduce a cluster formation and routing algorithm based on a mobility factor. The proposed algorithm is compared with the LEACH-M protocol on metrics viz. number of cluster head transitions, average residual energy, number of alive nodes and number of messages lost.
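The abstract does not define the mobility factor precisely, so the following sketch only illustrates one plausible reading of the idea: each node scores itself by its recent speed weighed against its remaining energy, and the best-scoring node in a neighbourhood becomes cluster head. The weighting, field names and values are invented for the example and are not the paper's algorithm.

```python
# Hypothetical mobility-aware cluster-head election (the paper's exact
# mobility factor is not given in the abstract): prefer slow-moving,
# energy-rich nodes. Weights and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    speed: float            # recent average speed (m/s)
    residual_energy: float  # joules remaining

def election_score(node: Node, w_mobility: float = 0.6) -> float:
    # Lower is better: penalize mobility, reward residual energy.
    return w_mobility * node.speed - (1 - w_mobility) * node.residual_energy

def elect_cluster_head(neighbourhood: list[Node]) -> Node:
    return min(neighbourhood, key=election_score)

cluster = [Node(1, speed=0.2, residual_energy=4.1),
           Node(2, speed=2.5, residual_energy=4.8),
           Node(3, speed=0.1, residual_energy=1.0)]
print(elect_cluster_head(cluster).node_id)   # node 1: slow and energy-rich
```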
Abstract:
Speech signals are one of the most important means of communication among human beings. In this paper, a comparative study of two feature extraction techniques is carried out for recognizing speaker-independent spoken isolated words. The first is a hybrid approach combining Linear Predictive Coding (LPC) and Artificial Neural Networks (ANN); the second uses a combination of Wavelet Packet Decomposition (WPD) and Artificial Neural Networks. Voice signals are sampled directly from the microphone and then processed using these two techniques to extract the features. Words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. Training, testing and pattern recognition are performed using Artificial Neural Networks, trained by the backpropagation method. The proposed method is implemented for 50 speakers uttering 20 isolated words each. Both methods produce good recognition accuracy, but Wavelet Packet Decomposition is found to be more suitable for recognizing speech because of its multi-resolution characteristics and efficient time-frequency localization.
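Linear Predictive Coding models each speech sample as a linear combination of the preceding samples, and the predictor coefficients then serve as the feature vector fed to the network. As a minimal self-contained sketch (the order, frame size and test tone here are arbitrary choices, not the paper's settings), LPC coefficients can be computed from a frame's autocorrelation with the Levinson-Durbin recursion:

```python
# LPC feature extraction sketch: autocorrelation + Levinson-Durbin.
import numpy as np

def lpc(frame: np.ndarray, order: int) -> np.ndarray:
    # Autocorrelation lags 0..order of the (windowed) frame.
    r = np.array([frame[:len(frame) - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):          # Levinson-Durbin recursion
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / err
        a[1:i + 1] = a[1:i + 1] + k * np.append(a[1:i][::-1], 0.0)
        a[i] = k
        err *= (1 - k * k)                 # prediction error shrinks each step
    return a                               # predictor polynomial A(z)

rng = np.random.default_rng(0)
frame = np.hamming(240) * (np.sin(2 * np.pi * 150 * np.arange(240) / 8000)
                           + 0.01 * rng.normal(size=240))
print(lpc(frame, order=10))
```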
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition; however, some subcarriers can suffer deep fading with multipath, and the power allocated to those faded subcarriers is likely to be wasted. In this paper, we compute the FER and BER bounds of a coded OFDM system as convex functions for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response that minimizes FER or BER under a constant transmission power constraint is obtained.
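The key point is that once the FER/BER bound is convex in the per-subcarrier powers, the allocation can be found by any convex solver. As an illustrative stand-in only (the paper's actual bound depends on its coder and interleaver), the sketch below minimizes a generic exponential error bound, the sum over subcarriers of exp(-g_i p_i), under a total-power constraint, using cvxpy.

```python
# Illustrative convex power allocation for coded OFDM (a generic
# exponential error bound stands in for the paper's FER/BER bound).
import cvxpy as cp
import numpy as np

n, total_power = 8, 8.0
rng = np.random.default_rng(1)
gain = rng.exponential(1.0, n)            # per-subcarrier channel gains g_i

p = cp.Variable(n, nonneg=True)           # power of each subcarrier
bound = cp.sum(cp.exp(cp.multiply(-gain, p)))   # convex surrogate error bound
problem = cp.Problem(cp.Minimize(bound), [cp.sum(p) == total_power])
problem.solve()

# Deeply faded subcarriers receive ~0 power; power concentrates where it
# reduces the bound most.
print("allocated powers:", np.round(p.value, 3))
```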