938 results for Generation from examples
Abstract:
A class of twenty-two grade one children was tested to determine their reading levels using the Stanford Diagnostic Reading Achievement Test. Based on these results and teacher input, the students were paired according to reading ability. The students' ages ranged from six years four months to seven years four months at the commencement of the study. Eleven children were assigned to the language experience group and their partners became the text group. Each member of the language experience group generated a list of eight to-be-learned words. The treatment consisted of exposing the student to a given word three times per session for ten sessions, over a period of five days. The dependent variables were word identification speed, word identification accuracy, and word recognition accuracy. Each member of the text group followed the same procedure using his/her partner's list of words. Upon completion of this training, the entire process was repeated, with members of the text group from the first part becoming members of the language experience group and vice versa. The results suggest that, generally speaking, language experience words are identified faster than text words, but that there is no difference in the rate at which these words are learned. Language experience words may be identified faster because auditory-semantic information is more readily available in them than in text words. The rate of learning for both types of words, however, may be dictated by the orthography of the to-be-learned word.
Abstract:
This study explores the stories and experiences of second-generation Portuguese Canadian secondary school students in Southern Ontario, Canada. The purpose of this research was to understand the educational experiences of students, specifically the successes, challenges, and struggles that the participants faced within the education system. Questions were also asked about identity issues and how participants perceived their identities influencing their educational experiences. Six Portuguese Canadian students in grades 9 to 11 were interviewed twice. The interviews ranged from 45 minutes to 90 minutes in length. Data analysis of qualitative, open-ended interviews, research journals, field notes, and curricular documents yielded understandings about the participants' experiences and challenges in the education system. Eight themes emerged from the data that explored the realities of everyday life for second-generation Portuguese Canadian students. These themes include: influences of part-time work on schooling, parental involvement, the teacher is key, challenges and barriers, the importance of peers, Portuguese Canadian identity, lack of focus on identity in curriculum content, and the dropout problem. Recommendations in this study include the need for more community-based programs to assist students. Furthermore, teachers are encouraged to utilize strategies and curriculum resources that engage learners and integrate their histories and identities. Educators are encouraged to question power dynamics both inside and outside the school system. There is also a need for further research with Portuguese Canadian students who are struggling in the education system, as well as an examination of the number of hours that students work.
Abstract:
The use of formal methods is increasingly common in software development, and type systems are the most successful formal method. Advances in formal methods present new challenges as well as new opportunities. One challenge is to ensure that a compiler preserves the semantics of programs, so that the properties guaranteed about the source code also apply to the executable code. This thesis presents a compiler that translates a higher-order functional language with polymorphism into a typed assembly language, whose main property is that type preservation is verified automatically, by means of type annotations on the compiler's code. Our compiler implements the code transformations essential for a higher-order functional language, namely CPS conversion, closure conversion, and code generation. We present the details of the strongly typed representations of the intermediate languages and the constraints they impose on the implementation of the code transformations. Our goal is to guarantee type preservation with a minimum of annotations, without compromising the overall modularity and readability of the compiler's code. This goal is largely achieved in the treatment of the core features of the language (the "simple types"), in contrast to the treatment of polymorphism, which still requires substantial work to satisfy type checking.
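To make the CPS-conversion step mentioned above concrete, here is a minimal, untyped sketch in Python of a one-pass CPS transformation over a toy lambda-calculus AST. It is illustrative only: the thesis's compiler operates on a richer, strongly typed intermediate representation, and all names below are hypothetical.

```python
from dataclasses import dataclass

# Tiny lambda-calculus AST (illustrative names; not the thesis's actual typed IR).
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

_counter = 0
def fresh(prefix):
    """Generate a fresh variable name."""
    global _counter
    _counter += 1
    return f"{prefix}{_counter}"

def cps(expr, k):
    """One-pass CPS conversion: `k` is a metalanguage continuation that,
    given an atom holding the value of `expr`, builds the rest of the term."""
    if isinstance(expr, Var):
        return k(expr)
    if isinstance(expr, Lam):
        c = fresh("k")
        # Each source function gains an explicit continuation parameter `c`.
        return k(Lam(expr.param,
                     Lam(c, cps(expr.body, lambda v: App(Var(c), v)))))
    if isinstance(expr, App):
        r = fresh("r")
        # Evaluate the function, then the argument, then call it with a
        # continuation that resumes the surrounding computation.
        return cps(expr.fn,
                   lambda f: cps(expr.arg,
                                 lambda a: App(App(f, a), Lam(r, k(Var(r))))))
    raise TypeError(f"unknown node: {expr!r}")

# Example: CPS-convert (\x. x) y, returning the final value to the top level.
print(cps(App(Lam("x", Var("x")), Var("y")), lambda v: v))
```

In a type-preserving compiler, each such transformation would additionally carry evidence that well-typed input terms map to well-typed output terms, which is the property the thesis verifies through type annotations.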
Abstract:
Realism in computer graphics requires creating increasingly complex objects (or scenes), which entails considerable cost. Procedural modeling can help automate the creation process, simplify the modification process, or generate multiple variants of an object instance. However, even though several procedural methods exist, no single method can create every type of complex object, in particular a complete building. The work carried out in this thesis proposes two solutions to the procedural modeling problem: one at the level of the base geometry, and the other in the form of a general system suited to modeling complex objects. First, we present the block, a new, simple, and general modeling primitive based on a generalized cubic shape. Blocks are laid out and connected to one another to form the basic shape of objects, from which a control mesh is extracted that can produce both smooth and sharp edges. The volumetric nature of blocks allows a simple specification of topology, as well as support for CSG operations between blocks. The surface parameterization, inherited from the faces of the blocks, provides support for textures and displacement functions used to apply surface details. A variety of examples illustrate the generality of blocks in both interactive and procedural modeling contexts. Second, we present a new procedural modeling system that unifies various techniques in a common framework. Our system relies on the concept of components to define various elements spatially and semantically. Through a series of successive statements executed on a subset of components obtained via queries, we create a component tree that ultimately defines an object whose geometry is generated using blocks. We applied our component-based modeling concept to the generation of complete buildings, with coherent interiors and exteriors. This new system proves general and well suited to partitioning spaces, inserting openings (doors and windows), integrating staircases, decorating façades and walls, laying out furniture, and various other operations necessary when constructing a complete building.
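As a rough illustration of the component-based modeling idea described above, the following Python sketch builds a small component tree through successive statements applied to query results; in the actual system, leaf components would drive block-based geometry generation. The API and names are hypothetical, not those of the thesis.

```python
class Component:
    """Toy spatial/semantic component node; the API and names are hypothetical."""
    def __init__(self, kind, **attrs):
        self.kind, self.attrs, self.children = kind, attrs, []

    def add(self, kind, **attrs):
        """Attach and return a child component."""
        child = Component(kind, **attrs)
        self.children.append(child)
        return child

    def query(self, kind):
        """Return every descendant component of the given kind (depth-first)."""
        found = [c for c in self.children if c.kind == kind]
        for c in self.children:
            found += c.query(kind)
        return found

# Successive statements executed on query results grow the component tree.
building = Component("building", floors=2)
for i in range(building.attrs["floors"]):
    floor = building.add("floor", index=i)
    floor.add("room", use="office")
    floor.add("room", use="corridor")

# A later rule decorates every room, wherever it sits in the tree.
for room in building.query("room"):
    room.add("openings", doors=1, windows=2)   # geometry would then be emitted as blocks

print(len(building.query("room")), "rooms")     # -> 4 rooms
```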
Abstract:
In this thesis, we focus in particular on a cryptographic primitive known as secret sharing. We explore both the classical and the quantum domains of these primitives, crowning our study with the presentation of a new quantum secret sharing protocol that requires a minimal number of quantum shares, i.e., a single quantum share per participant. Our study opens with a preliminary chapter surveying the mathematical notions underlying quantum information theory, whose primary purpose is to establish the notation used in this manuscript, together with a summary of the mathematical properties of the Greenberger-Horne-Zeilinger (GHZ) state, frequently used in the quantum domains of cryptography and communication games. But, as mentioned above, cryptography remains the focal point of this study. In the second chapter, we examine the theory of classical and quantum error-correcting codes, which in turn are of utmost importance for the introduction of the quantum theory of secret sharing in the following chapter. In the first part of the third chapter, we concentrate on the classical domain of secret sharing, presenting a general theoretical framework for constructing these primitives and illustrating the concepts introduced throughout with examples chosen for their historical as well as pedagogical interest. This paves the way for our exposition of the quantum theory of secret sharing, which is the focus of the second part of that chapter. We then present the most general theorems and definitions known to date for constructing these primitives, with particular attention to quantum threshold sharing. We show the close link between the quantum theory of error-correcting codes and that of secret sharing; the link is so close that quantum error-correcting codes can be considered closer analogues of quantum secret sharing schemes than classical secret sharing schemes are. Finally, we present one of our three results published in A. Broadbent, P.-R. Chouha, A. Tapp (2009): a secure and minimal quantum threshold secret sharing protocol (the other two results, which we do not treat here, concern communication complexity and the classical simulation of the GHZ state).
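For reference, the GHZ state mentioned above and a standard classical (n, n) secret-sharing construction (both textbook definitions, not results of the thesis) can be written as

\[
|\mathrm{GHZ}_n\rangle \;=\; \frac{1}{\sqrt{2}}\left(|0\rangle^{\otimes n} + |1\rangle^{\otimes n}\right),
\]

and, classically, a secret bit string $s$ can be split among $n$ participants by choosing shares $r_1, \dots, r_{n-1}$ uniformly at random and setting $r_n = s \oplus r_1 \oplus \cdots \oplus r_{n-1}$, so that all $n$ shares are needed to recover $s$, while any $n-1$ of them reveal nothing about it.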
Abstract:
INTRODUCTION: Emerging evidence indicates that nitric oxide (NO), which is increased in osteoarthritic (OA) cartilage, plays a role in 4-hydroxynonenal (HNE) generation through peroxynitrite formation. HNE is considered the most reactive product of lipid peroxidation (LPO). We have previously reported that HNE levels in synovial fluids are higher in the knees of OA patients than in healthy individuals. We have also demonstrated that HNE induces a panoply of inflammatory and catabolic mediators known for their implication in OA cartilage degradation. The aim of the present study was to investigate the ability of the inducible NO synthase (iNOS) inhibitor L-NIL (L-N6-(1-iminoethyl)lysine) to prevent HNE generation through NO inhibition in human OA chondrocytes. METHOD: Cells and cartilage explants were treated with or without an NO generator (SIN or interleukin 1beta (IL-1β)) or HNE, in the absence or presence of L-NIL. Protein expression of both iNOS and the free-radical-generating NOX subunit p47(phox) was investigated by Western blot. iNOS mRNA was measured by real-time RT-PCR. HNE production was analysed by ELISA, Western blot, and immunohistochemistry. S-nitrosylated proteins were evaluated by Western blot. Prostaglandin E2 (PGE2) and metalloproteinase 13 (MMP-13) levels, as well as glutathione S-transferase (GST) activity, were each assessed with commercial kits. NO release was determined using the improved Griess method. Reactive oxygen species (ROS) generation was revealed by fluorescence microscopy using commercial kits. RESULTS: L-NIL prevented IL-1β-induced NO release, iNOS expression at the protein and mRNA levels, S-nitrosylated proteins, and HNE generation in a dose-dependent manner after 24 h of incubation. Interestingly, we revealed that L-NIL abolished IL-1β-induced NOX component p47phox as well as ROS release. The HNE-induced PGE2 release and both cyclooxygenase-2 (COX-2) and MMP-13 expression were significantly reduced by L-NIL addition. Furthermore, L-NIL blocked the IL-1β-induced inactivation of GST, an HNE-metabolizing enzyme. L-NIL also prevented HNE-induced cell death at cytotoxic levels. CONCLUSION: Altogether, our findings support a beneficial effect of L-NIL in OA by preventing the LPO process through NO-dependent and/or independent mechanisms.
Abstract:
Observatoire des fédérations series
Abstract:
Corteo is a program that implements the Monte Carlo (MC) method to simulate ion beam analysis (IBA) spectra for several techniques by following the ions' trajectories until a sufficiently large fraction of them reach the detector to generate a spectrum. Hence, it fully accounts for effects such as multiple scattering (MS). Here, a version of Corteo is presented in which the target can be a 2D or 3D image. This image can be derived from micrographs in which the different compounds are identified, thus bringing extra information into the solution of an IBA spectrum and potentially constraining the solution significantly. The image intrinsically includes many details, such as the actual surface or interfacial roughness, or the actual shape and distribution of nanostructures. This can, for example, lead to the unambiguous identification of the stoichiometry of structures in a layer, or at least to better constraints on their composition. Because MC computes the trajectory of the ions in detail, it accurately simulates many aspects of their transport, such as ions coming back into the target after leaving it (re-entry), as well as passing through a variety of nanostructure shapes and orientations. We show, for example, how, as the ions' angle of incidence becomes shallower than the inclination distribution of a rough surface, this process tends to make the effective roughness smaller in a comparable 1D simulation (i.e. a narrower thickness distribution in a comparable slab simulation). Also, in ordered nanostructures, target re-entry can lead to replications of a peak in a spectrum. In addition, the bitmap description of the target can be used to simulate depth profiles such as those resulting from ion implantation, diffusion, and intermixing. Other improvements to Corteo include the possibility of interpolating the cross-section in angle-energy tables, and the generation of energy-depth maps.
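As a schematic illustration (not Corteo's actual algorithm or data structures), the following Python sketch follows ions step by step across a 2D pixel map whose entries index materials, applying a toy energy loss and small-angle scattering at each step; all numbers and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D target map: each pixel stores a material index (0 = vacuum).
target = np.zeros((200, 200), dtype=int)
target[80:, :] = 1                       # a substrate occupying the lower half of the image
stopping = {0: 0.0, 1: 0.5}              # toy energy loss per unit path length, per material

def follow_ion(x, y, theta, energy, step=1.0):
    """Step one ion through the pixel map until it stops or exits (toy model)."""
    while energy > 0:
        x += step * np.cos(theta)
        y += step * np.sin(theta)
        i, j = int(y), int(x)
        if not (0 <= i < target.shape[0] and 0 <= j < target.shape[1]):
            return "exited", x, y, energy          # a full model would allow re-entry
        material = target[i, j]
        energy -= stopping[material] * step        # lumped electronic/nuclear energy loss
        if material != 0:
            theta += rng.normal(0.0, 0.02)         # small-angle multiple scattering
    return "stopped", x, y, 0.0

results = [follow_ion(x=0.0, y=60.0, theta=0.3, energy=50.0) for _ in range(1000)]
print(sum(r[0] == "stopped" for r in results), "of 1000 ions stopped inside the target")
```

Replacing the pixel map with one segmented from a micrograph is, schematically, what allows the simulation to account for measured roughness and nanostructure shapes rather than idealized slabs.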
Abstract:
The measurement of global precipitation is of great importance in climate modeling, since the release of latent heat associated with tropical convection is one of the principal driving mechanisms of atmospheric circulation. Knowledge of the larger-scale precipitation field also has important potential applications in the generation of initial conditions for numerical weather prediction models. Knowledge of the relationship between rainfall intensity and kinetic energy, and its variations in time and space, is important for erosion prediction. Vegetation on earth also greatly depends on the total amount of rainfall as well as on the drop size distribution (DSD) of rainfall. While methods using visible, infrared, and microwave radiometer data have been shown to yield useful estimates of precipitation, validation of these products for the open ocean has been hampered by the limited amount of surface rainfall measurements available for accurate assessment, especially for the tropical oceans. Surface rainfall measurements (often called the ground truth) are carried out by rain gauges working on various principles, such as weighing type, tipping bucket, capacitive type, and so on. The acoustic technique is yet another promising method of rain parameter measurement that has many advantages. The basic principle of the acoustic method is that droplets falling on water produce underwater sound with distinct features, from which the rainfall parameters can be computed. The acoustic technique can also be used to develop a low-cost and accurate device for automatic measurement of rainfall rate and the kinetic energy of rain, especially suitable for telemetry applications. This technique can also be utilized to develop a low-cost disdrometer that finds application in rainfall analysis as well as in the calibration of nozzles and sprinklers. This thesis is divided into the following 7 chapters, which describe the methodology adopted, the results obtained, and the conclusions arrived at.
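For context, the standard textbook relations linking the drop size distribution to rainfall rate and rainfall kinetic energy flux, which such acoustic measurements aim to estimate, are

\[
R = \frac{\pi}{6}\int_0^\infty N(D)\,D^{3}\,v(D)\,\mathrm{d}D,
\qquad
E_K = \frac{\pi\rho_w}{12}\int_0^\infty N(D)\,D^{3}\,v(D)^{3}\,\mathrm{d}D,
\]

where $N(D)$ is the drop size distribution (drops per unit volume per unit diameter), $v(D)$ the terminal fall speed of a drop of diameter $D$, and $\rho_w$ the density of water. These are generic relations stated here for orientation, not formulas taken from the thesis.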
Abstract:
Laser ablation of graphite has been carried out using 1.06 µm radiation from a Q-switched Nd:YAG laser, and the time-of-flight distribution of molecular C2 present in the resultant plasma is investigated as a function of distance from the target as well as laser fluence, employing a time-resolved spectroscopic technique. At low laser fluences the intensities of the emission lines from C2 exhibit only a single-peak structure, while beyond a threshold laser fluence the emission from C2 shows a twin-peak distribution in time. The occurrence of the faster velocity component at higher laser fluences is explained as due to species generated by recombination processes, while the delayed peak is attributed to the dissociation of higher carbon clusters resulting in the generation of the C2 molecule. Analysis of the measured data provides a fairly complete picture of the evolution and dynamics of C2 species in the laser-induced plasma from graphite.
Abstract:
In this thesis we have studied a few models involving the self-generation of priorities. Priority queues have been extensively discussed in the literature. However, these are situations involving priority assigned to (or possessed by) customers at the time of their arrival. Nevertheless, customers generating priority while waiting is a common phenomenon. Such situations arise, for example, at a physician's clinic, with aircraft hovering over an airport running low on fuel while waiting for clearance to land, and in several communication systems. Quantification of these is rarely seen in the literature, except for the work cited in the introduction. Our attempt is to quantify a few such problems. In doing so, we have also generalized the classical priority queues by introducing priority generation (customers moving to higher priorities during their wait). We have proceeded systematically from the single-server queue to the multi-server queue. We have also introduced customers with repeated attempts (retrials) generating priorities. All models analyzed in this thesis involve non-preemptive service. Since the models are not analytically tractable, a large number of numerical illustrations is provided in each chapter to give a feel for the working of the systems; a toy simulation of one such model is sketched below.
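The sketch below is a small Python discrete-event simulation of a single-server, non-preemptive queue in which each waiting customer self-generates high priority after an exponentially distributed time. The parameters and structure are illustrative assumptions, not any specific model from the thesis.

```python
import random

random.seed(1)

def simulate(lam=0.8, mu=1.0, theta=0.5, horizon=10_000.0):
    """Single-server, non-preemptive queue in which each waiting customer
    self-generates high priority after an Exp(theta) time (toy model)."""
    t, next_arrival = 0.0, random.expovariate(lam)
    busy_until = None
    waiting = []                  # (arrival_time, upgrade_time) of customers in queue
    waits_upgraded, waits_ordinary = [], []

    def start_service(now):
        nonlocal busy_until
        # Serve customers that have self-generated priority first, oldest first.
        upgraded = [c for c in waiting if c[1] <= now]
        pool = upgraded if upgraded else waiting
        cust = min(pool, key=lambda c: c[0])
        waiting.remove(cust)
        (waits_upgraded if upgraded else waits_ordinary).append(now - cust[0])
        busy_until = now + random.expovariate(mu)

    while t < horizon:
        if busy_until is not None and busy_until <= next_arrival:
            t, busy_until = busy_until, None       # service completion
            if waiting:
                start_service(t)
        else:
            t = next_arrival                       # arrival
            waiting.append((t, t + random.expovariate(theta)))
            next_arrival = t + random.expovariate(lam)
            if busy_until is None:
                start_service(t)

    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(waits_upgraded), mean(waits_ordinary)

# Mean waiting times of customers served after vs. before self-generating priority.
print(simulate())
```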
Abstract:
Wind energy has emerged as a major sustainable source of energy. The efficiency of wind power generation by wind mills has improved considerably during the last three decades, and there is still further scope for maximising the conversion of wind energy into mechanical energy. In this context, wind turbine rotor dynamics has great significance. The present work aims at a comprehensive study of Horizontal Axis Wind Turbine (HAWT) aerodynamics by numerically solving the fluid dynamic equations with the help of a finite-volume Navier-Stokes CFD solver. As a more general goal, the study aims at demonstrating the capabilities of modern numerical techniques for the complex fluid dynamic problems of HAWTs. The main purpose is hence to better understand the physics of power extraction by wind turbines. This research demonstrates the potential of an incompressible Navier-Stokes CFD method for the aerodynamic power performance analysis of a horizontal axis wind turbine. The National Renewable Energy Laboratory, USA (NREL; Technical Report NREL/CP-500-28589) had carried out experimental work aimed at the real-time performance prediction of a horizontal axis wind turbine. In addition to a comparison between the results reported by NREL and the CFD simulations, comparisons are made for the local flow angle at several stations ahead of the wind turbine blades. The comparison has shown that fairly good predictions can be made for pressure distribution and torque. Subsequently, the wind-field effects on the blade aerodynamics, as well as the blade/tower interaction, were investigated. The selected case corresponded to a 12.5 m/s up-wind HAWT at zero degrees of yaw angle and a rotational speed of 25 rpm. The results obtained suggest that the present method can cope well with the flows encountered around wind turbines. The aerodynamic performance of the turbine and the flow details near and off the turbine blades and tower can be analysed using these results. The aerodynamic performance of airfoils differs from one another; it mainly depends on the coefficient of performance, coefficient of lift, coefficient of drag, velocity of the fluid, and angle of attack. This study shows that the velocity is not constant over all angles of attack for different airfoils. The performance parameters are calculated analytically and are compared with standardized performance tests. For different angles of attack, the stall velocity is determined for the better performance of a system with respect to velocity. The research also addresses the effect of the surface roughness factor on the blade surface at various sections. The numerical results were found to be in agreement with the experimental data. A relative advantage of the theoretical aerofoil design method is that it allows many different concepts to be explored economically; such efforts are generally impractical in wind tunnels because of time and money constraints. Thus, the need for a theoretical aerofoil design method is threefold: first, for the design of aerofoils that fall outside the range of applicability of existing catalogs; second, for the design of aerofoils that more exactly match the requirements of the intended application; and third, for the economic exploration of many aerofoil concepts. From the results obtained for the different aerofoils, the velocity is not constant over all angles of attack, and the results mainly depend on the angle of attack and velocity. The vortex generator technique was meticulously studied, with the formulation of the specification for right-angle-shaped vortex generators (VGs). The results were validated against the primary analysis phase and were found to be in good agreement with the power curve. The introduction of correctly sized VGs at appropriate locations over the blades of the selected HAWT was found to increase the power generation by about 4%.
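For context, the performance coefficients referred to above are conventionally defined as follows (standard definitions, not taken from the thesis):

\[
C_P = \frac{P}{\tfrac{1}{2}\rho A V^{3}}, \qquad
C_L = \frac{L}{\tfrac{1}{2}\rho V^{2} S}, \qquad
C_D = \frac{D}{\tfrac{1}{2}\rho V^{2} S},
\]

where $P$ is the extracted power, $\rho$ the air density, $A$ the rotor swept area, $V$ the free-stream wind speed, and $L$, $D$ the lift and drag forces on a blade section of reference area $S$. The Betz limit bounds $C_P$ by $16/27 \approx 0.593$.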
Abstract:
The mechanism of generation of atomic Na and K from SiO2 samples has been studied using explicitly correlated wave function and density functional theory cluster calculations. Possible pathways for the photon and electron stimulated desorption of Na and K atoms from silicates are proposed, thus providing new insight on the generation of the tenuous Na and K atmosphere of the Moon.
Abstract:
Beta-glucosidases (BGL) are critical enzymes in the biomass hydrolysis process and are important in creating highly efficient enzyme cocktails for the bio-ethanol industry. Of the two strategies proposed for overcoming the glucose inhibition of commercial cellulases, one is to use a heavy dose of BGL in the enzyme blends, and the second is to perform simultaneous saccharification and fermentation, where glucose is converted to alcohol as soon as it is generated. While the former needs extremely high quantities of enzyme, the latter is inefficient since the conditions for hydrolysis and fermentation are different. This makes the process technically challenging; moreover, in this case less alcohol is generated, making its recovery difficult. A third option is to use glucose-tolerant β-glucosidases, which can work at elevated glucose concentrations. However, there are very few reports on such enzymes from microbial sources, especially filamentous fungi, which can be cultivated on cheap biomass as raw material. Very few studies have been directed at this, even though filamentous fungi that are efficient degraders of biomass may well harbor such enzymes. This study therefore aimed at isolating a fungus capable of secreting a glucose-tolerant β-glucosidase enzyme. Production and characterization of β-glucosidases and the application of BGL for bioethanol production were attempted.
Abstract:
In symmetric block ciphers, substitution and diffusion operations are performed in multiple rounds using sub-keys generated by a key generation procedure called the key schedule. The key schedule plays a very important role in determining the security of block ciphers. In this paper we propose a complex key generation procedure, based on matrix manipulations, which could be introduced in symmetric ciphers. The proposed key generation procedure offers two advantages. First, the procedure is simple to implement yet makes determining the sub-keys through cryptanalysis complex. Secondly, the procedure produces a strong avalanche effect, causing many bits in the output block of a cipher to change with a one-bit change in the secret key. As a case study, the matrix-based key generation procedure has been introduced into the Advanced Encryption Standard (AES) by replacing the existing AES key schedule. The key avalanche and differential key propagation produced in AES have been observed. The paper describes the matrix-based key generation procedure and the enhanced key avalanche and differential key propagation produced in AES. It has been shown that the key avalanche effect and differential key propagation characteristics of AES improve when the AES key schedule is replaced with the matrix-based key generation procedure.
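A small Python sketch of how a key-avalanche measurement of this kind can be set up: derive round keys with a stand-in matrix-based schedule over GF(2), flip a single key bit, and count the fraction of round-key bits that change (ideally close to one half). The schedule below is a hypothetical placeholder, not the procedure proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.integers(0, 2, size=(128, 128))      # fixed public mixing matrix over GF(2)

def toy_schedule(key_bits, rounds=10):
    """Stand-in 'matrix-based' key schedule: each round key is M @ previous (mod 2)
    plus a trivial round constant. Illustrative only, not the paper's procedure."""
    state, round_keys = key_bits.copy(), []
    for r in range(rounds):
        state = (M @ state) % 2
        state[r % state.size] ^= 1               # simple round constant
        round_keys.append(state.copy())
    return np.concatenate(round_keys)

def avalanche(trials=200):
    """Average fraction of round-key bits that flip when one key bit flips (ideal ~0.5)."""
    fractions = []
    for _ in range(trials):
        key = rng.integers(0, 2, size=128)
        flipped = key.copy()
        flipped[rng.integers(128)] ^= 1          # flip a single, randomly chosen key bit
        fractions.append(np.mean(toy_schedule(key) != toy_schedule(flipped)))
    return float(np.mean(fractions))

print(f"mean key-avalanche fraction: {avalanche():.3f}")
```

The same measurement loop can be pointed at any key schedule (including AES's own, or the matrix-based one the paper proposes) to compare their key avalanche behaviour.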