958 results for WEB (Computer program language)


Relevance: 40.00%

Abstract:

Writing is an academic skill critical to students in today's schools, as it serves as a predominant means for demonstrating knowledge during the school years (Graham, 2008). However, for many students with Specific Learning Disabilities (SLD), learning to write is a challenging, complex process (Lane, Graham, Harris, & Weisenbach, 2006). Students with SLD have substantial writing challenges related to the nature of their disability (Mayes & Calhoun, 2005).

This study investigated the effects of computer graphic organizer software on the narrative writing compositions of four fourth- and fifth-grade elementary-level boys with SLD. A multiple baseline design across subjects was used to explore the effects of the software on four dependent variables: total number of words, total planning time, number of common story elements, and overall organization.

Prior to baseline, participants were taught the fundamentals of narrative writing. Throughout baseline and intervention, participants were read a narrative writing prompt and were allowed up to 10 minutes to plan their writing, followed by 15 minutes for writing and 5 minutes for editing. During baseline, all planning was done using paper and pencil. During intervention, planning was done on the computer using a graphic organizer developed with the software program Kidspiration 3.0 (2011). During both baseline and intervention, all compositions were written, and all editing was done, using paper and pencil.

The results of this study indicated that, to varying degrees, computer graphic organizers had a positive effect on the narrative writing abilities of elementary-aged students with SLD. Participants wrote more words (from 54.74 to 96.60 more), planned for longer periods of time (from 4.50 to 9.50 more minutes), and included more story elements in their compositions (from 2.00 to 5.10 more, out of a possible 6). There were nominal to no improvements in overall organization across the four participants.
The results suggest that teachers of students with SLD should consider using computer graphic organizers in their narrative writing instruction, perhaps in conjunction with remedial writing strategies. Future investigations could examine other writing genres, other stages of the writing process, participants with varied demographics, and the use of graphic organizers combined with remedial writing instruction.

Relevance: 40.00%

Abstract:

This study investigated the influence that receiving instruction in two languages, English and Spanish, had on the performance of students enrolled in the International Studies (IS) Program (delayed partial immersion model) of Miami Dade County Public Schools (MDCPS) on three sections of a standardized test in English, the Stanford Achievement Test, eighth edition (SAT): Reading Comprehension, Mathematics Computation, and Mathematics Applications.

The performance of the selected IS program/Spanish section cohort of students (N = 55) on the SAT Reading Comprehension, Mathematics Computation, and Mathematics Applications sections across four consecutive years was contrasted with that of a control group of comparable students selected within the same feeder pattern where the IS program is implemented (N = 21). The performance of the group was also compared to the cross-sectional achievement patterns of the school's corresponding feeder pattern, region, and district.

The research model for the study was a variation of the "causal-comparative" or "ex post facto" design sometimes referred to as "prospective". After data were collected from MDCPS, t-tests were performed to compare the IS-Spanish students' SAT performance for grades 3 to 6, for the years 1994 to 1997, against control group, feeder pattern, region, and district norms for each year on Reading Comprehension, Mathematics Computation, and Mathematics Applications. Repeated measures ANOVA and Tukey's tests were performed to compare the mean percentiles of the groups under study and the possible interactions of the different variables. All tests were performed at the 5% significance level.

The analyses showed that the IS group performed significantly better than the control group on all three measures across the four years. The IS group's mean percentiles on the three measures were also significantly higher than those of the feeder pattern, region, and district.
The null hypotheses were rejected, and it was concluded that receiving instruction in two languages did not negatively affect the performance of IS program students on tests taken in English. It was also concluded that the particular design of the IS program enhances the general performance of participating students on standardized tests. The quantitative analyses were coupled with interviews with teachers and administrators of the IS program to gain additional insight into different aspects of the program's implementation at each particular school.

Relevance: 40.00%

Abstract:

The purpose of this research was to apply model checking, using a symbolic model checker, to Predicate Transition Nets (PrT nets). A PrT net is a formal model of information flow that allows system properties to be modeled and analyzed. The aim of this thesis was to use the modeling and analysis power of PrT nets to provide a mechanism for verifying the system model. Symbolic Model Verifier (SMV) was the model checker chosen for this thesis, and in order to verify the PrT net model of a system, the model was translated into the SMV input language. A software tool was implemented that translates a PrT net into the SMV language, thus enabling the process of model checking. The system includes two parts: the PrT net editor, in which the representation of a system can be edited, and the translator, which converts the PrT net into an SMV program.
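The translation step can be sketched in miniature. The snippet below is a toy illustration, not the thesis tool: it handles only an ordinary place/transition net with boolean markings (real PrT nets carry structured tokens and predicates), assumes the first place is initially marked, invents its own input format, and omits frame conditions for places a transition does not touch.

```python
# Toy translator from an ordinary place/transition net to SMV input text.
# Assumptions (sketch only): boolean place markings, first place initially
# marked, no frame conditions for untouched places.

def net_to_smv(places, transitions):
    """places: list of place names; transitions: list of (inputs, outputs)."""
    lines = ["MODULE main", "VAR"]
    lines += [f"  {p} : boolean;" for p in places]
    lines.append("ASSIGN")
    for i, p in enumerate(places):
        lines.append(f"  init({p}) := {'TRUE' if i == 0 else 'FALSE'};")
    # One disjunct per transition: all inputs marked, inputs consumed,
    # outputs produced.
    clauses = []
    for ins, outs in transitions:
        guard = " & ".join(ins)
        effect = " & ".join([f"next({p}) = FALSE" for p in ins] +
                            [f"next({p}) = TRUE" for p in outs])
        clauses.append(f"({guard} & {effect})")
    lines.append("TRANS")
    lines.append("  " + " | ".join(clauses))
    return "\n".join(lines)

print(net_to_smv(["p1", "p2"], [(["p1"], ["p2"])]))
```

The output is a complete SMV module whose TRANS relation fires the single transition, moving the token from p1 to p2.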

Relevance: 40.00%

Abstract:

Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries against Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework.

Data Extractor treats Web sites as data sources, controlling query execution and data retrieval, and works as an intermediary between applications and sites. It uses a two-fold "custom wrapper" approach to information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are handled by Java-based wrappers that use a specially designed library of data retrieval, parsing, and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis, and processing.

Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. The study confirms the feasibility of building custom wrappers for Web sites: the approach provides accurate data retrieval, along with power and flexibility in handling complex cases.
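As a flavor of the scripted-wrapper idea, here is a minimal sketch in Python. The thesis uses its own scripting language and Java wrappers; the page layout, CSS class names, and fields below are invented for illustration.

```python
import re

# A hypothetical wrapper: turn one site's listing page into (title, price)
# tuples, i.e., treat the page as a relational data source.
def wrap_listing(html):
    """Extract (title, price) rows from a simple HTML fragment."""
    pattern = re.compile(
        r'<li><span class="title">(.*?)</span>\s*'
        r'<span class="price">(.*?)</span></li>', re.S)
    return [(t.strip(), p.strip()) for t, p in pattern.findall(html)]

page = """
<ul>
  <li><span class="title">Widget</span> <span class="price">9.99</span></li>
  <li><span class="title">Gadget</span> <span class="price">4.50</span></li>
</ul>
"""
print(wrap_listing(page))  # → [('Widget', '9.99'), ('Gadget', '4.50')]
```

A real wrapper would also fetch the page and handle pagination and errors; the point here is only that a few lines of script suffice for a regularly structured site, which is why the thesis reserves Java wrappers for the complex cases.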

Relevance: 40.00%

Abstract:

This work was supported by the Spanish Ministry for Economy and Competitiveness (grant TIN2014-56633-C3-1-R) and by the European Regional Development Fund (ERDF/FEDER) and the Galician Ministry of Education (grants GRC2014/030 and CN2012/151). Alejandro Ramos-Soto is supported by the Spanish Ministry for Economy and Competitiveness (FPI Fellowship Program) under grant BES-2012-051878.

Relevance: 40.00%

Abstract:

This research aimed to describe, understand, and discuss the curriculum development process of a Brazilian-Portuguese heritage language (HL) community-based school in South Florida. The study was guided by the following research question: (a) What roles does this HL community-based school aim to play for its students? The investigation also addressed a subsidiary question: (b) How does this HL community-based school organize its curriculum development process? To explore these questions, I observed and interviewed teachers and coordinators, following a qualitative research approach. I analyzed the interview transcripts and the program's website, with a central focus on describing and understanding the school's curriculum development process. The findings may help Brazilian and other HL community-based schools discuss and develop their own curricula, as well as seek out specific teacher-training courses.

Relevance: 40.00%

Abstract:

Background: Many school-based interventions are being delivered in the absence of evidence of effectiveness (Snowling & Hulme, 2011, Br. J. Educ. Psychol., 81, 1). Aim: This study sought to address this oversight by evaluating the effectiveness of the commonly used Lexia Reading Core5 intervention with 4- to 6-year-old pupils in Northern Ireland. Sample: A total of 126 primary school pupils in year 1 and year 2 were screened on the Phonological Assessment Battery, 2nd edition (PhAB-2). Children were recruited from the year groups equivalent to Reception and Year 1 in England and Wales, and to Pre-kindergarten and Kindergarten in North America.
Methods: A total of 98 below-average pupils were randomized (T0) to either an 8-week block (mean = 647.51 min, SD = 158.21) of daily access to Lexia Reading Core5 (n = 49) or a waiting-list control group (n = 49). Assessment of phonological skills was completed at post-intervention (T1) and at 2-month follow-up (T2) for the intervention group only.
Results: Analysis of covariance, controlling for baseline scores, found that the Lexia Reading Core5 intervention group made significantly greater gains in blending, F(1, 95) = 6.50, p = .012, partial η2 = .064 (small effect size), and non-word reading, F(1, 95) = 7.20, p = .009, partial η2 = .070 (small effect size). Analysis of the 2-month follow-up of the intervention group found that all treatment gains were maintained. However, improvements were not uniform within the intervention group, with 35% failing to make progress despite access to support. Post-hoc analysis revealed that higher T0 phonological working memory scores predicted the improvements made in phonological skills.
Conclusions: An early-intervention, computer-based literacy program can be effective in boosting the phonological skills of 4- to 6-year-olds, particularly if these literacy difficulties are not linked to phonological working memory deficits.

Relevance: 40.00%

Abstract:

Overweight problems have been on the rise in recent decades, particularly among young Quebecers. This increase is linked to eating habits that differ markedly from nutritional recommendations. Moreover, the provincial government has introduced major changes to the Quebec Education Program (Programme de formation de l'école québécoise) to encourage the adoption of healthy lifestyle habits. To counter these problems of excess weight and poor eating habits, and to continue in the spirit of the school reform, the Web version of the Nutriathlon en équipe program was developed. The program aims to lead each participant to improve the quality of their diet by increasing and diversifying their consumption of vegetables, fruits, and dairy products. The objectives of the present study are (1) to evaluate the program's impact on the consumption of vegetables and fruits (VF) and dairy products (DP) among high school students and (2) to evaluate the factors influencing the program's success among these youths. The results showed that during the program and immediately afterward, the intervention group reported a significant increase in VF and DP consumption relative to the control group. However, no effect could be observed in the medium term. As factors facilitating the success of the Nutriathlon en équipe, the students mentioned the use of technology to log portions, the formation of teams, the involvement of teachers and family, and the creation of strategies to facilitate success in the program.

The students also mentioned barriers to the success of the Nutriathlon en équipe, such as a lack of diligence in entering their data outside class hours, malfunctioning user codes, and the platform's incompatibility with certain devices such as tablets.

Relevance: 40.00%

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the Web, and other systems. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files; during such sessions, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session. Therefore, highly secure authentication methods must be used.

We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and their temporal relationships, providing a model of how each user typically behaves. Users are then continuously monitored during software operation, and large deviations from "normal behavior" can indicate malicious or unintended behavior. The approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to those actions. User identification through web logs is cost-effective and non-intrusive.

We perform experiments on a large fielded system with web logs of approximately 4,000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis.
A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and it is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices, and in the analysis of network traffic.
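The n-gram idea can be sketched as follows. This is a minimal bigram model with add-one smoothing over invented action names, not Intruder Detector's code; the vocabulary size is an assumed constant, and real deployments would set a deviation threshold rather than compare two sessions directly.

```python
from collections import Counter
import math

def ngrams(actions, n=2):
    """Sliding windows of length n over an action sequence."""
    return [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]

def train(sessions, n=2):
    """Per-user bigram counts accumulated over training sessions."""
    counts = Counter()
    for s in sessions:
        counts.update(ngrams(s, n))
    return counts

def score(counts, session, n=2, vocab=100):
    """Average log-probability of a session under the user's model
    (add-one smoothing; vocab is an assumed vocabulary size)."""
    total = sum(counts.values())
    grams = ngrams(session, n)
    logp = sum(math.log((counts[g] + 1) / (total + vocab)) for g in grams)
    return logp / max(len(grams), 1)

# A user who habitually logs in, searches, views, and logs out:
alice = train([["login", "search", "view", "logout"]] * 20)
typical = score(alice, ["login", "search", "view", "logout"])
odd = score(alice, ["login", "export", "export", "export"])
print(typical > odd)  # familiar sequences score higher → True
```

A monitor would flag a session whenever its score falls far below the user's historical average, which is the "large deviation from normal behavior" the abstract describes.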

Relevance: 40.00%

Abstract:

This article describes the design and implementation of a computer-aided tool called Relational Algebra Translator (RAT), used in database courses for teaching relational algebra. A problem arose when the relational algebra topic was introduced in the course EIF 211 Design and Implementation of Databases, part of the Information Systems Engineering program at the National University of Costa Rica: students attending the course lacked the deep mathematical background required, which created a learning problem in a subject that is essential for understanding how database searches and queries work. RAT was created to enhance this teaching-learning process. The article introduces the architectural and design principles required for its implementation, such as the language symbol table, the grammatical rules, and the basic algorithms RAT uses to translate from relational algebra to the SQL language. The tool has been used for one academic period and has proven effective in the teaching-learning process. This encouraged the researchers to publish it on the website www.slinfo.una.ac.cr so that it can be used in other university courses.
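The core translation idea can be sketched as a small recursive walk over a relational algebra expression tree. This is an illustrative toy, not RAT's actual implementation; the tuple-based expression format and the Students example are invented.

```python
# Toy relational-algebra-to-SQL translator: each RA operator becomes a
# SELECT wrapped around its translated sub-expression.
def to_sql(expr):
    op = expr[0]
    if op == "relation":              # base relation R
        return f"SELECT * FROM {expr[1]}"
    if op == "select":                # σ_condition(E)
        _, cond, sub = expr
        return f"SELECT * FROM ({to_sql(sub)}) t WHERE {cond}"
    if op == "project":               # π_columns(E)
        _, cols, sub = expr
        return f"SELECT {', '.join(cols)} FROM ({to_sql(sub)}) t"
    raise ValueError(f"unknown operator: {op}")

# π_name(σ_age>21(Students))
ra = ("project", ["name"], ("select", "age > 21", ("relation", "Students")))
print(to_sql(ra))
# SELECT name FROM (SELECT * FROM (SELECT * FROM Students) t WHERE age > 21) t
```

A production translator would flatten the nested subqueries and cover the remaining operators (join, union, difference, rename), but the one-operator-per-SELECT correspondence is what makes the mapping easy to teach.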

Relevance: 40.00%

Abstract:

The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g., photos, identities, genomes) at risk. To protect users' data privacy, there is growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden even from the computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g., secure multiparty computation) and trusted hardware (e.g., secure processors) to instantiate a "secure" abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g., memory accesses, timing, and termination) in such a "secure" abstract machine may leak highly sensitive information, including the cryptographic keys that form the root of trust for these systems.

This thesis broadly expands the investigation of a research direction called trace oblivious computation, in which programming language techniques are employed to prevent side-channel information leakage. We demonstrate the feasibility of trace oblivious computation by formalizing and building several systems: GhostRider, a hardware-software co-design that provides a hardware-based trace oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that makes it easier for programmers to develop oblivious applications. All of these systems enjoy formal security guarantees while outperforming prior systems by one to several orders of magnitude.
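The flavor of trace obliviousness can be shown with the classic linear-scan read: to hide which element is accessed, the code touches every element in the same order regardless of the secret index, so the memory trace reveals nothing about it. This is a textbook illustration, not code from GhostRider, SCVM, or ObliVM, and real systems use ORAM-style structures to avoid the linear cost.

```python
# Oblivious array read: the sequence of memory accesses is identical for
# every value of secret_index, so the access trace leaks nothing about it.
def oblivious_read(arr, secret_index):
    result = 0
    for i, v in enumerate(arr):          # always len(arr) reads, in order
        match = int(i == secret_index)   # 1 exactly once
        # arithmetic select instead of a secret-dependent branch
        result = match * v + (1 - match) * result
    return result

data = [10, 20, 30, 40]
print(oblivious_read(data, 2))  # → 30
```

The select is done with arithmetic rather than an `if` on the secret, since a secret-dependent branch would reintroduce a timing/control-flow side channel; compilers like those described in the thesis automate exactly this kind of transformation.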

Relevance: 40.00%

Abstract:

In the context of computer numerical control (CNC) and computer-aided manufacturing (CAM), programming language capabilities such as symbolic and intuitive programming, program portability, and a rich geometrical portfolio are especially important: they save time, help avoid errors during part programming, and permit code re-use. Our updated literature review indicates that the current state of the art has gaps in parametric programming, program portability, and programming flexibility. In response, this article presents a compiler implementation for EGCL (Extended G-code Language), a new, enriched CNC programming language that allows the use of descriptive variable names, geometrical functions, and flow-control statements (if-then-else, while). Our compiler produces low-level, generic, elementary ISO-compliant G-code, thus allowing flexibility in the choice of the executing CNC machine and improving portability. Our results show that readable variable names and flow-control statements allow simplified, intuitive part programming and permit program re-use. Future work includes allowing programmers to define their own functions in EGCL, in contrast to the current status of having them only as built-in library functions.
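The kind of lowering such a compiler performs can be sketched as follows. The drilling example, parameter names, and feed rate are invented for illustration (the real compiler parses EGCL source); the point is that named parameters and a counted loop expand into elementary ISO G-code moves.

```python
# Hypothetical lowering pass: a parameterized "drill a row of holes"
# routine expands into elementary G0 (rapid) / G1 (feed) commands.
def lower_drill_row(start_x, y, depth, holes, pitch):
    """Emit ISO G-code for a row of drilled holes."""
    out = []
    x = start_x
    for _ in range(holes):                    # loop unrolled at compile time
        out.append(f"G0 X{x:.1f} Y{y:.1f}")   # rapid move above the hole
        out.append(f"G1 Z{-depth:.1f} F100")  # feed down to drilling depth
        out.append("G0 Z5.0")                 # retract to safe height
        x += pitch                            # step to the next hole
    return out

program = lower_drill_row(start_x=10.0, y=0.0, depth=2.0, holes=3, pitch=5.0)
print("\n".join(program))
```

Because the loop and the named parameters exist only at the source level, the emitted program is plain elementary G-code that any ISO-compliant controller can execute, which is the portability argument the abstract makes.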

Relevance: 40.00%

Abstract:

Computer simulation programs are essential tools for scientists and engineers seeking to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that are discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. The main goal of this research is therefore to design a useful verification and validation framework that can identify model representation errors and is applicable to generic simulators.

The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part mines the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented.

The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms.
This work improves the computer architecture research and verification processes as shown by the case studies and experiments that have been conducted.
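The first part of the framework can be sketched as a generic invariant checker over an event trace. The event fields and the invariant below are invented for illustration; FOLCSL itself synthesizes such checkers automatically from first-order logic specifications rather than taking hand-written Python predicates.

```python
# Sketch of a synthesized verification program: replay the simulator's
# event trace and report the first invariant violation, if any.
def check_trace(trace, invariants):
    """Return the first (step, invariant_name) violation, or None."""
    for step, event in enumerate(trace):
        for name, pred in invariants.items():
            if not pred(event):
                return (step, name)
    return None

# Invented micro-architecture-style trace: the third event has a bad pc.
trace = [{"op": "fetch", "pc": 0},
         {"op": "exec", "pc": 0},
         {"op": "fetch", "pc": -4}]
invariants = {"pc_nonnegative": lambda e: e["pc"] >= 0}
print(check_trace(trace, invariants))  # → (2, 'pc_nonnegative')
```

Reporting the step index matters in practice: it points the developer at the exact point in the simulation where the implemented model diverged from the intended one.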

Relevance: 40.00%

Abstract:

This thesis provides a corpus-assisted pragmatic investigation of three Japanese expressions commonly signalled as apologetic, namely gomen, su(m)imasen, and mōshiwake arimasen, which can be roughly translated into English as '(I'm) sorry'. The analysis is based on a web corpus of 306,670 tokens collected from the Q&A website Yahoo! Chiebukuro, examined through a combination of quantitative (statistical) and qualitative (traditional close reading) methods. By adopting a form-to-function approach, the study aims to shed light on three main topics of interest: the pragmatic functions of apology-like expressions, the discursive strategies they co-occur with, and the behaviours that warrant them. The overall findings reveal that apology-like expressions are multifunctional devices whose meanings extend well beyond 'apology' alone. These meanings are affected by a number of discursive strategies that can either increase or decrease the perceived (im)politeness level of the speech act to serve interactants' face needs and communicative goals. The study also identifies a variety of behaviours that people frame as violations, not necessarily because they are actually face-threatening to the receiver, but because doing so is functional to the projection of the apologiser as a moral persona. An additional finding that emerged from the analysis is the pervasiveness of reflexive usages of apology-like expressions, which are often employed metadiscursively to convey, negotiate, and challenge opinions on how language should be used. To conclude, the study provides a unique insight into the use of three expressions whose pragmatic meanings are more varied than anticipated. The findings reflect the use of (im)politeness in an online, non-Western context and represent a step towards a more inclusive notion of 'apologies' and related speech acts.

Relevance: 40.00%

Abstract:

Participation in the Language Toolkit program, a joint initiative of the Department of Interpretation and Translation of Forlì (University of Bologna) and the Chamber of Commerce of Romagna, led to the creation of this dissertation. The program aims to support the internationalization of SMEs while also introducing near-graduates to a real professional context. The author collaborated on this project with Leonori, a company that produces and sells high jewelry products online and through retailers. The purpose of the collaboration was to translate and localize part of the company website from Italian into Chinese, so as to facilitate the company's internationalization toward Chinese-speaking countries. This dissertation is organized according to the stages a translator usually goes through when carrying out a translation task. First, however, it was necessary to provide the theoretical background for the project: the first chapter introduces the Language Toolkit program, the concept of internationalization, and the company itself. The second chapter is dedicated to inverse translation, including the advantages and disadvantages of this practice. The third chapter highlights the main features of localization, with particular emphasis on web localization. The fourth chapter deals with the analysis of the source text, according to the looping model developed by Nord (1991). The fifth chapter describes in detail the methods used to create the language resources, i.e., two comparable monolingual corpora and a termbase, which were built ad hoc for this project. The sixth chapter outlines the translation strategies that were implemented, with examples from the source text. The final chapter describes the revision process, which took place both during and after the translation phase.