957 results for SQL Query generation from examples
Abstract:
A flow injection hydride generation direct current plasma atomic emission spectrometric (FI-HG-DCP-AES) method was developed for the determination of lead at the ng mL-1 level. Potassium ferricyanide (K3Fe(CN)6) was used along with sodium tetrahydroborate(III) (NaBH4) to produce plumbane (PbH4) in an acid medium. The design of a gas-liquid separator (hydride generator) was tested and the parameters of the flow injection system were optimized to achieve a good detection limit and sample throughput. The technique developed gave a detection limit of 0.7 ng mL-1 (3σb). The precision at the 20 ng mL-1 level was 1.6% RSD with 11 measurements (n = 11). The volume of the sample loop was 500 µL. A sample throughput of 120 h-1 was achieved. The transition elements Fe(II), Fe(III), Cd(II), Co(II), Mn(II), Ni(II) and Zn(II) do not interfere in this method, but 1 mg L-1 Cu(II) will suppress 50% of the signal from a sample containing 20 ng mL-1 Pb. This method was successfully applied to determine lead in a calcium carbonate (CaCO3) matrix of banded coral skeletons from Si-Chang Island in Thailand.
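The quoted detection limit follows the usual 3σ-of-the-blank convention; as a reminder, this is the standard textbook definition, not an equation taken from the thesis:

% Conventional 3-sigma detection limit: s_b is the standard deviation of the
% blank signal and m is the slope of the calibration line (the sensitivity).
\[
  c_{\mathrm{L}} = \frac{3\, s_{\mathrm{b}}}{m}
\]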
Abstract:
Although much research has explored computer-mediated communication for its application in second language instruction, there still exists a need for empirical results from research to guide practitioners who wish to introduce web-based activities into their instruction. This study was undertaken to explore collaborative online task-based activities for the instruction of ESL academic writing. Nine ESL students in their mid-twenties, enrolled at a community college in Ontario, engaged in two separate online prewriting activities in both a synchronous and an asynchronous environment. The students were interviewed in order to explore their perceptions of how the activities affected the generation and organization of ideas for academic essays. These interviews were triangulated with examples of the students' online writing, nonparticipatory observations of the students' interactions, and a discussion with the course instructor. The results of the study reveal that a small majority of students felt that brainstorming in writing with their peers in an asynchronous online discussion created a grammatical and lexical framework that supported idea generation and organization. The students did not feel that the synchronous chat activity was as successful. Although they felt that this activity also contributed to the generation of ideas, synchronous chat introduced a level of difficulty in communication that hindered the students' engagement in the task and failed to assist them with the organization of their ideas. The students also noted positive aspects of the web-based activities that were not related to prewriting tasks, for example, improved typing and word-processing skills. Directions for future research could explore whether online prewriting activities can assist students in the creation of essays that are syntactically or lexically complex.
Abstract:
During the last 30 years, Aboriginal peoples in Canada have made steady progress in reclaiming responsibility for the education of their young people, especially in primary and secondary school. In comparison, the education and/or training of adult populations has not kept pace, and many socioeconomic and sociocultural indicators demonstrate a continued confinement of those populations to the margins of the dominant society of Canada. It is the adults, the mothers and fathers, the grandmothers and grandfathers, the aunties and uncles, who are the first teachers of the next generation, and the nature of these relationships replicates the culture of unwellness in each subsequent generation through those teachers. There are few examples in the Aboriginal adult education literature that give voice to the educational experience of the Learner. This study addresses that gap by exploring the perspectives embedded in the stories of a Circle of Learners who are, or were, enrolled in the Bachelor of Education in Aboriginal Adult Education program at Brock University. That Circle of 10 participants included 9 women and 1 man; 6 were from various Anishinabek nations, while 4 represented the Hotinonshón:ni nations in southern Ontario. They are an eclectic group, representing many professions, age groups, spiritual traditions, and backgrounds. This, then, is their story, the story of the Learning and Healing pedagogy and an expanded vision of Aboriginal education and research at Brock University.
Abstract:
This study examined the challenges associated with the explicit delivery of a question-generation strategy with 8 Arab Canadian students from the perspective of a bilingual beginning teacher. This study took place in a private school and involved 2 stages consisting of 9 instructional sessions and individual interviews with the students. Data gathered from these interviews and the researcher's field notes from the sessions were used to gain insights about the participants' understanding and use of explicit instruction. The themes that emerged from the data included "teacher attitude," "students' enhanced metacognitive awareness and strategy use," "listening skills," and "instructional challenges." Briefly, the teacher's attitude demonstrated how her beliefs and knowledge influenced her willingness and perseverance to teach explicitly. Students' enhanced metacognitive awareness and strategy use included students' understanding and use of the question-generation strategy. The students' listening skills suggested that culture may influence their response to the delivery of explicit instruction. Here, the cultural expectations associated with being a good listener reinforced students' willingness to engage in this strategy. Students' prior knowledge also influenced their interaction with the question-generation strategy. Time for process versus covering content was a dominant instructional challenge. This study provides firsthand information for teachers when considering how students' cultural backgrounds may affect their reactions to explicit strategy instruction.
Abstract:
Modifications to the commercial hydride generator, manufactured by Spectrametrics, resulted in an improved operating procedure and enhancement of the arsenic and germanium signals. Experiments with arsenic(III) and arsenic(V) showed that identical results could be produced from both oxidation states. However, since arsenic(V) is reduced more slowly than arsenic(III), peak areas and not peak heights must be measured when the arsine is immediately stripped from the system (approximately 5 seconds reaction). When the reduction is allowed to proceed for 20 seconds before the arsine is stripped, peak heights may be used. For a 200 ng/mL solution, the relative standard deviation is 2.8% for As(III) and 3.8% for As(V). The detection limit for arsenic using the modified system is 0.50 ng/mL. Studies performed on As(V) standards show that the interferences from 1000 mg/L of nickel(II), cobalt(II), iron(III), copper(II), cadmium(II), and zinc(II) can be eliminated with the aid of 5 M HCl and 3% L-cystine. Conditions for the reduction of germanium to the corresponding hydride were investigated. The effect of different concentrations of HCl on the reduction of germanium to the covalent hydride in aqueous media by means of NaBH4 solutions was assessed. Results show that the best response is obtained at a pH of 1.7. The use of buffer solutions was similarly characterized. In both cases, results showed that the element is best reduced when the final pH of the solution after reaction is almost neutral. In addition, a more sensitive method, which includes the use of (NH4)2S2O8, has been developed. A 20% increase in the germanium signal is registered when compared to the signal achieved with HCl alone. Moreover, under these conditions, reduction of germanium could be accomplished even when the solution's pH is neutral. For a 100 ng/mL germanium standard the RSD is 3%. The detection limit for germanium in 0.05 M HCl medium (pH 1.7) is 0.10 ng/mL, and 0.09 ng/mL when ammonium persulphate is used in conjunction with HCl. Interferences from 1000 mg/L of iron(III), copper(II), cobalt(II), nickel(II), cadmium(II), lead(II), mercury(II), aluminum(III), tin(IV), arsenic(III), arsenic(V) and zinc(II) were studied and characterized. In this regard, the use of (NH4)2S2O8 and HCl at a pH of 1.7 proved to be a successful mixture for the suppression of the interferences caused by iron, copper, aluminum, tin, lead, and arsenic. The method was applied to the determination of germanium in cherts and iron ores. In addition, experiments with tin(IV) showed that a 15% increase in the tin signal can be accomplished in the presence of 1 mL of 10% (m/V) (NH4)2S2O8.
Abstract:
A method using L-cysteine for the determination of arsenous acid (As(III)), arsenic acid (As(V)), monomethylarsonic acid (MMAA), and dimethylarsinic acid (DMAA) by hydride generation was demonstrated. The instrument used was a d.c. plasma atomic emission spectrometer (DCP-AES). Complete recovery was reported for As(III), As(V), and DMAA, while 86% recovery was reported for MMAA. Detection limits, determined as arsenic for the species listed previously, were 1.2, 0.8, 1.1, and 1.0 ng mL-1, respectively. Precision values, at 50 ng mL-1 arsenic concentration, were 1.8%, 2.5%, 2.6% and 2.6% relative standard deviation, respectively. The L-cysteine reagent was compared directly with the conventional hydride generation technique, which uses a potassium iodide-hydrochloric acid medium. Compared with the conventional method, L-cysteine provided similar recoveries for As(III), slightly better recoveries for As(V) and MMAA, and significantly better recoveries for DMAA. In addition, tall and sharp peak shapes were observed for all four species when using L-cysteine. The arsenic speciation method involved separation by ion-exchange high-performance liquid chromatography (HPLC) with on-line hydride generation using the L-cysteine reagent and measurement by DCP-AES. Total analysis time per sample was 12 min, while the time between the start of subsequent runs was approximately 20 min. A binary gradient elution program, which incorporated two eluents, 0.01 and 0.5 mM trisodium citrate, both containing 5% methanol (v/v) and both at a pH of approximately 9, was used during the separation by HPLC. Recoveries of the four species, measured as peak area and normalized against As(III), were 88%, 29%, and 40% for DMAA, MMAA and As(V), respectively. Resolution factors between adjacent analyte peaks were 1.1 for As(III) and DMAA, 1.3 for DMAA and MMAA, and 8.6 for MMAA and As(V). During the arsenic speciation study, signals from the d.c. plasma optical system were measured using a new photon-signal integrating device. The new photon integrator, developed and built in this laboratory, was based on a previously published design which was further modified to reflect currently available hardware. This photon integrator was interfaced to a personal computer through an A/D converter. The photon integrator has adjustable threshold settings and an adjustable post-gain device.
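The resolution factors quoted above follow the standard chromatographic definition, reproduced here for reference (symbols are generic, not taken from the thesis):

% Resolution between adjacent peaks with retention times t_1 < t_2 and
% baseline peak widths w_1 and w_2; R_s >= 1.5 is usually treated as
% baseline separation.
\[
  R_s = \frac{2\,(t_2 - t_1)}{w_1 + w_2}
\]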
Abstract:
Arsenic, bismuth, germanium, antimony and tin were simultaneously determined by continuous hydride generation and inductively coupled plasma atomic emission spectrometry. Hydrides were introduced into four different types of gas-liquid separators. Two of the gas-liquid separators were available in-house; a third was developed for this project, and a fourth was based on a design used by CETAC. The best signal intensity was achieved by the type II frit-based gas-liquid separator, but the modified CETAC design showed promise for the future owing to its low relative standard deviation. A method was developed for the determination of arsenic, bismuth, antimony and tin in low-alloy steels. Four standard reference materials from NIST were dissolved in 10 mL of aqua regia without heat. Good agreement was obtained between experimental values and certified values for arsenic, bismuth, antimony and tin. The method was developed to provide the analyst with the opportunity to determine the analytes using simple aqueous standards to prepare calibration lines. Within the limits of the samples analyzed, the method developed is independent of matrix.
Abstract:
A class of twenty-two grade one children was tested to determine their reading levels using the Stanford Diagnostic Reading Achievement Test. Based on these results and teacher input, the students were paired according to reading ability. The students' ages ranged from six years four months to seven years four months at the commencement of the study. Eleven children were assigned to the language experience group and their partners became the text group. Each member of the language experience group generated a list of eight to-be-learned words. The treatment consisted of exposing the student to a given word three times per session for ten sessions, over a period of five days. The dependent variables consisted of word identification speed, word identification accuracy, and word recognition accuracy. Each member of the text group followed the same procedure using his/her partner's list of words. Upon completion of this training, the entire process was repeated, with members of the text group from the first part becoming members of the language experience group and vice versa. The results suggest that, generally speaking, language experience words are identified faster than text words but that there is no difference in the rate at which these words are learned. Language experience words may be identified faster because the auditory-semantic information is more readily available in them than in text words. The rate of learning for both types of words, however, may be dictated by the orthography of the to-be-learned word.
Abstract:
This study explores the stories and experiences of second-generation Portuguese Canadian secondary school students in Southern Ontario, Canada. The purpose of this research was to understand the educational experiences of students, specifically the successes, challenges, and struggles that the participants faced within the education system. Questions were also asked about identity issues and how participants perceived their identities influencing their educational experiences. Six Portuguese Canadian students in grades 9 to 11 were interviewed twice. The interviews ranged from 45 minutes to 90 minutes in length. Data analysis of qualitative, open-ended interviews, research journals, field notes and curricular documents yielded understandings about the participants' experiences and challenges in the education system. Eight themes emerged from the data that explored the realities of everyday life for second-generation Portuguese Canadian students. These themes include: influences of part-time work on schooling, parental involvement, the teacher is key, challenges and barriers, the importance of peers, Portuguese Canadian identity, lack of focus on identity in curriculum content, and the dropout problem. Recommendations in this study include the need for more community-based programs to assist students. Furthermore, teachers are encouraged to utilize strategies and curriculum resources that engage learners and integrate their histories and identities. Educators are encouraged to question power dynamics both inside and outside the school system. There is also a need for further research with Portuguese Canadian students who are struggling in the education system, as well as an examination of the number of hours that students work.
Abstract:
Classical relational databases lack proper ways to manage certain real-world situations involving imprecise or uncertain data. Fuzzy databases overcome this limitation by allowing each entry in a table to be a fuzzy set, where each element of the corresponding domain is assigned a membership degree from the real interval [0, 1]. But this fuzzy mechanism becomes inappropriate for modelling scenarios where data might be incomparable. Therefore, we are interested in a further generalization of the fuzzy database to the L-fuzzy database. In such a database, the characteristic function of a fuzzy set maps into an arbitrary complete Brouwerian lattice L. From the query-language perspective, FSQL, the language of fuzzy databases, extends the regular Structured Query Language (SQL) by adding fuzzy-specific constructions. In addition, the L-fuzzy query language LFSQL introduces appropriate linguistic operations to define and manipulate inexact data in an L-fuzzy database. This research mainly focuses on defining the semantics of LFSQL. This requires an abstract algebraic theory that can be used to prove all the properties of, and operations on, L-fuzzy relations. In our study, we show that the theory of arrow categories forms a suitable framework for that purpose. Therefore, we define the semantics of LFSQL in the abstract notion of an arrow category. In addition, we implement the operations of L-fuzzy relations in Haskell and develop a parser that translates algebraic expressions into our implementation.
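To make the data model above concrete, here is a minimal Haskell sketch, assuming membership degrees drawn from a bounded lattice instead of [0, 1]; the names (Lattice, LFuzzyRel, degree) are illustrative only and are not the thesis's actual implementation, which works in the more general setting of arrow categories.

-- A minimal sketch of an L-fuzzy relation over a bounded lattice.
-- All names here are hypothetical, for illustration only.
import qualified Data.Map as Map

class Lattice l where
  bot  :: l            -- least element: "definitely not related"
  top  :: l            -- greatest element: "definitely related"
  meet :: l -> l -> l  -- greatest lower bound
  join :: l -> l -> l  -- least upper bound

-- The two-element lattice recovers ordinary (crisp) relations.
instance Lattice Bool where
  bot  = False
  top  = True
  meet = (&&)
  join = (||)

-- An L-fuzzy relation between finite carriers a and b: every pair carries a
-- membership degree in l; pairs absent from the map default to bot.
newtype LFuzzyRel a b l = LFuzzyRel (Map.Map (a, b) l)

degree :: (Ord a, Ord b, Lattice l) => LFuzzyRel a b l -> a -> b -> l
degree (LFuzzyRel m) x y = Map.findWithDefault bot (x, y) m

-- Union and intersection of L-fuzzy relations, computed pointwise.
unionRel, interRel :: (Ord a, Ord b, Lattice l)
                   => LFuzzyRel a b l -> LFuzzyRel a b l -> LFuzzyRel a b l
unionRel (LFuzzyRel m1) (LFuzzyRel m2) = LFuzzyRel (Map.unionWith join m1 m2)
interRel (LFuzzyRel m1) (LFuzzyRel m2) =
  LFuzzyRel (Map.intersectionWith meet m1 m2)

A complete Brouwerian lattice additionally supplies a relative pseudo-complement, which this sketch omits for brevity.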
Abstract:
The use of formal methods is increasingly common in software development, and type systems are the most successful formal method. The advance of formal methods presents new challenges as well as new opportunities. One of the challenges is to ensure that a compiler preserves the semantics of programs, so that the properties guaranteed about the source code also apply to the executable code. This thesis presents a compiler that translates a higher-order functional language with polymorphism into a typed assembly language, whose main property is that type preservation is verified automatically, by means of type annotations on the compiler's code. Our compiler implements the code transformations essential for a higher-order functional language, namely CPS conversion, closure conversion, and code generation. We present the details of the strongly typed representations of the intermediate languages and the constraints they impose on the implementation of the code transformations. Our goal is to guarantee type preservation with a minimum of annotations and without compromising the overall modularity and readability of the compiler's code. This goal is largely achieved in the treatment of the core features of the language (the "simple types"), in contrast to the treatment of polymorphism, which still requires substantial work to satisfy type checking.
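As a small illustration of verifying type preservation through typed representations (a sketch only, not the thesis's compiler, which targets typed assembly via CPS and closure conversion), a strongly typed intermediate representation can be encoded as a Haskell GADT, so that any transformation over it is type-preserving by construction:

{-# LANGUAGE GADTs #-}
-- Hypothetical toy IR: the index t records the object-language type of a term.
data Expr t where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int  -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- A type-preserving rewrite: the signature Expr t -> Expr t is itself the
-- statement that the object-language type is preserved, and the host type
-- checker verifies it.
constFold :: Expr t -> Expr t
constFold (Add a b) =
  case (constFold a, constFold b) of
    (IntLit x, IntLit y) -> IntLit (x + y)
    (a', b')             -> Add a' b'
constFold (If c t e) = If (constFold c) (constFold t) (constFold e)
constFold e          = e

Scaling this discipline up to CPS conversion, closure conversion and polymorphism is exactly where the annotation burden discussed above arises.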
Abstract:
Realism in computer-graphics images requires creating increasingly complex objects (or scenes), which entails considerable cost. Procedural modelling can help automate the creation process, simplify the modification process, or generate multiple variants of an object instance. However, even though several procedural methods exist, no single method can create every type of complex object, in particular a complete building. The work carried out in this thesis proposes two solutions to the procedural modelling problem: one at the level of the base geometry, and the other in the form of a general system suited to modelling complex objects. First, we present the block, a new, simple and general modelling primitive based on a generalized cubic shape. Blocks are arranged and connected to one another to form the base shape of objects, from which a control mesh is extracted that can produce both smooth and sharp edges. The volumetric nature of blocks allows a simple specification of topology, as well as support for CSG operations between blocks. The surface parameterization, inherited from the blocks' faces, provides support for textures and displacement functions used to apply surface detail. A variety of examples illustrate the generality of blocks in both interactive and procedural modelling contexts. Second, we present a new procedural modelling system that unifies various techniques within a common framework. Our system rests on the concept of components to define various elements spatially and semantically. Through a series of successive statements executed on a subset of components obtained via queries, we build a tree of components that ultimately defines an object whose geometry is generated using blocks. We applied our component-based modelling concept to the generation of complete buildings, with coherent interiors and exteriors. This new system proves to be general and well suited to partitioning spaces, inserting openings (doors and windows), integrating staircases, decorating façades and walls, arranging furniture, and the various other operations required when constructing a complete building.
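Purely as an illustration of the component concept described above (hypothetical names, not the thesis's system), a component tree with spatial extents and a query operation could be sketched in Haskell as follows:

-- Illustrative sketch only: a tree of named components with spatial extents,
-- refined by children (e.g. a floor contains rooms, a room contains windows).
data Vec3 = Vec3 Double Double Double deriving Show

data Component = Component
  { cname    :: String       -- semantic label, e.g. "floor", "room", "window"
  , origin   :: Vec3         -- placement in space
  , extent   :: Vec3         -- spatial size of the component
  , children :: [Component]  -- successive refinements
  } deriving Show

-- A query selects the subset of components satisfying a predicate, on which
-- subsequent modelling statements would then operate.
query :: (Component -> Bool) -> Component -> [Component]
query p c = (if p c then [c] else []) ++ concatMap (query p) (children c)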
Abstract:
In this thesis, we focus in particular on a cryptographic primitive known as secret sharing. We explore both the classical and the quantum settings of these primitives, crowning our study with the presentation of a new quantum secret-sharing protocol requiring a minimal number of quantum shares, i.e., a single quantum share per participant. Our study opens, in the preliminary chapter, with an overview of the mathematical notions underlying quantum information theory, whose primary purpose is to establish the notation used in this manuscript, together with a summary of the mathematical properties of the Greenberger-Horne-Zeilinger (GHZ) state, frequently used in the quantum settings of cryptography and communication games. But, as mentioned above, the cryptographic setting remains the focal point of this study. In the second chapter, we turn to the theory of classical and quantum error-correcting codes, which will in turn be of utmost importance for the introduction of quantum secret-sharing theory in the following chapter. In the first part of the third chapter, we concentrate on classical secret sharing, presenting a general theoretical framework for the construction of these primitives and illustrating the concepts throughout with examples chosen for their historical as well as pedagogical interest. This prepares the ground for our exposition of quantum secret-sharing theory, the focus of the second part of the same chapter. We then present the most general theorems and definitions known to date concerning the construction of these primitives, with particular attention to threshold quantum secret sharing. We show the close link between the theory of quantum error-correcting codes and that of secret sharing, a link so close that quantum error-correcting codes are considered closer analogues of quantum secret-sharing schemes than classical secret-sharing codes are. Finally, we present one of our three results published in A. Broadbent, P.-R. Chouha, A. Tapp (2009): a secure and minimal threshold quantum secret-sharing protocol (the two other results, not treated here, concern communication complexity and the classical simulation of the GHZ state).
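For reference, the GHZ state mentioned above is, in standard notation (the usual definition, not quoted from the thesis):

% Three-party GHZ state and its n-party generalization.
\[
  |\mathrm{GHZ}_3\rangle = \tfrac{1}{\sqrt{2}}\left(|000\rangle + |111\rangle\right),
  \qquad
  |\mathrm{GHZ}_n\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle^{\otimes n} + |1\rangle^{\otimes n}\right).
\]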
Abstract:
INTRODUCTION: Emerging evidence indicates that nitric oxide (NO), which is increased in osteoarthritic (OA) cartilage, plays a role in 4-hydroxynonenal (HNE) generation through peroxynitrite formation. HNE is considered the most reactive product of lipid peroxidation (LPO). We have previously reported that HNE levels in synovial fluids are more elevated in the knees of OA patients compared to healthy individuals. We also demonstrated that HNE induces a panoply of inflammatory and catabolic mediators known for their implication in OA cartilage degradation. The aim of the present study was to investigate the ability of the inducible NO synthase (iNOS) inhibitor L-NIL (L-N6-(1-iminoethyl)lysine) to prevent HNE generation through NO inhibition in human OA chondrocytes. METHOD: Cells and cartilage explants were treated with or without either an NO generator (SIN or interleukin 1beta (IL-1β)) or HNE, in the absence or presence of L-NIL. Protein expression of both iNOS and the free-radical-generating NOX subunit p47(phox) was investigated by Western blot. iNOS mRNA was measured by real-time RT-PCR. HNE production was analysed by ELISA, Western blot and immunohistochemistry. S-nitrosylated proteins were evaluated by Western blot. Prostaglandin E2 (PGE2) and metalloproteinase 13 (MMP-13) levels, as well as glutathione S-transferase (GST) activity, were each assessed with commercial kits. NO release was determined using the improved Griess method. Reactive oxygen species (ROS) generation was revealed by fluorescence microscopy with the use of commercial kits. RESULTS: L-NIL prevented IL-1β-induced NO release, iNOS expression at the protein and mRNA levels, S-nitrosylated proteins and HNE in a dose-dependent manner after 24 h of incubation. Interestingly, we revealed that L-NIL abolished the IL-1β-induced NOX component p47phox as well as ROS release. The HNE-induced PGE2 release and both cyclooxygenase-2 (COX-2) and MMP-13 expression were significantly reduced by L-NIL addition. Furthermore, L-NIL blocked the IL-1β-induced inactivation of GST, an HNE-metabolizing enzyme. L-NIL also prevented HNE-induced cell death at cytotoxic levels. CONCLUSION: Altogether, our findings support a beneficial effect of L-NIL in OA by preventing the LPO process through NO-dependent and/or -independent mechanisms.
Abstract:
Observatoire des fédérations series