940 results for MODEL (Computer program language)
Abstract:
Software corpora facilitate reproducibility of analyses; however, static analysis of an entire corpus still requires considerable effort, often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for a single language, increasing the effort of cross-language analysis. To address these issues we propose Pangea, an infrastructure allowing fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object model snapshots that can be directly loaded into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the-box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus, using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
Abstract:
Purpose: Malposition of the acetabular component in total hip arthroplasty (THA) is a common surgical problem that can lead to hip dislocation and reduced range of motion, and may result in early loosening. The aim of this study is to validate the accuracy and reproducibility of a single-x-ray-image-based 2D/3D reconstruction technique in determining cup inclination and anteversion against two different computed tomography (CT)-based measurement techniques. Methods: Cup anteversion and inclination of 20 patients after cementless primary THA were measured on standard anteroposterior (AP) radiographs with the help of the single x-ray 2D/3D reconstruction program and compared with two different 3D CT-based analyses [Ground Truth (GT) and MeVis (MV) reconstruction models]. Results: The measurements from the single x-ray 2D/3D reconstruction technique were strongly correlated with both types of CT image-processing protocols for both cup inclination [R²=0.69 (GT); R²=0.59 (MV)] and anteversion [R²=0.89 (GT); R²=0.80 (MV)]. Conclusions: The single-x-ray-image-based 2D/3D reconstruction technique is a feasible method to assess cup position on postoperative x-rays. CT scans remain the gold standard for more complex biomechanical evaluation when a lower tolerance limit (+/-2 degrees) is required.
Computer model simulation of alveolar phase III slopes: Implications for tidal single-breath washout
Abstract:
Periacetabular Osteotomy (PAO) is a joint preserving surgical intervention intended to increase femoral head coverage and thereby to improve stability in young patients with hip dysplasia. Previously, we developed a CT-based, computer-assisted program for PAO diagnosis and planning, which allows for quantifying the 3D acetabular morphology with parameters such as acetabular version, inclination, lateral center edge (LCE) angle and femoral head coverage ratio (CO). In order to verify the hypothesis that our morphology-based planning strategy can improve biomechanical characteristics of dysplastic hips, we developed a 3D finite element model based on patient-specific geometry to predict cartilage contact stress change before and after morphology-based planning. Our experimental results demonstrated that the morphology-based planning strategy could reduce cartilage contact pressures and at the same time increase contact areas. In conclusion, our computer-assisted system is an efficient tool for PAO planning.
Abstract:
As schools are pressured to perform on academics and standardized examinations, they are reluctant to dedicate increased time to physical activity. After-school exercise and health programs may provide an opportunity to engage in more physical activity without taking time away from coursework during the day. The current study is a secondary analysis of data from a randomized trial of a 10-week after-school program (six schools, n = 903) that implemented an exercise component based on the CATCH physical activity component and health modules based on the culturally-tailored Bienestar health education program. Outcome variables included BMI and aerobic capacity, health knowledge and healthy food intentions, as assessed through path analysis techniques. Both the baseline model (χ2 (df = 8) = 16.90, p = .031; RMSEA = .035 (90% CI of .010–.058); NNFI = 0.983; CFI = 0.995) and the model incorporating intervention participation (χ2 (df = 10) = 11.59, p = .314; RMSEA = .013 (90% CI of .010–.039); NNFI = 0.996; CFI = 0.999) proved to be a good fit to the data. Experimental group participation was not predictive of changes in health knowledge, intentions to eat healthy foods, or changes in Body Mass Index, but it was associated with increased aerobic capacity, β = .067, p < .05. School characteristics, including SES and language proficiency, proved to be significantly associated with changes in knowledge and physical indicators. Further study of the effects of school-level variables on intervention outcomes is recommended so that tailored interventions can be developed aimed at the specific characteristics of each participating school.
Abstract:
Background. This study was designed to evaluate the effects of the Young Leaders for Healthy Change program, an internet-delivered program in the school setting that emphasized health advocacy skills development, on nutrition and physical activity behaviors among older adolescents (13–18 years). The program consisted of online curricular modules, training modules, social media, peer and parental support, and a community service project. Module content was developed based on Social Cognitive Theory and known determinants of behavior for older adolescents. Methods. Of the 283 students who participated in the fall 2011 YL program, 38 students participated in at least ten of the 12 weeks and were eligible for this study. This study used a single-group pretest/posttest evaluation design. Participants were 68% female, 58% white/Caucasian, 74% 10th or 11th graders, and 89% mostly A and/or B students. The primary behavioral outcomes for this analysis were participation in 60 minutes of physical activity per day, participation in 20 minutes of vigorous- or moderate-intensity physical activity (MVPA) per day, television and computer time, fruit and vegetable (FV) intake, sugar-sweetened beverage intake, and consumption of breakfast, home-cooked meals, and fast food. Other outcomes included knowledge, beliefs, and attitudes related to healthy eating, physical activity, and advocacy skills. Findings. Among the 38 participants, no significant changes in any variables were observed. However, among those who did not previously meet behavioral goals there was an 89% increase in students who participated in more than 20 minutes of MVPA per day and a 58% increase in students who ate home-cooked meals 5–7 days per week. The majority of participants met program goals related to knowledge, beliefs, and attitudes prior to the start of the program. Participants reported either maintaining or improving to the goal at posttest for all items except FV intake knowledge, taste and affordability of healthy foods, interest in teaching others about being healthy, and ease of finding ways to advocate in the community. Conclusions. The results of this evaluation indicated that promoting healthy behaviors requires different strategies than maintaining healthy behaviors among high school students. In the school setting, programs need to target both the promotion and the maintenance of health behaviors to engage all students who participate in the program as part of a class or club activity. Tailoring the program, using screening and modifying strategies to meet the needs of all students, may increase its potential reach. The Transtheoretical Model (TTM) may provide information on how to develop a tailored program, and additional research on how to utilize the constructs of TTM effectively among high school students needs to be conducted. Further evaluation studies should employ a more expansive evaluation to assess the long-term effectiveness of health advocacy programming.
Abstract:
Sediment spectral reflectance measurements were generated aboard the JOIDES Resolution during Ocean Drilling Program Leg 162 shipboard operations. The large size of the raw data set (over 1.3 gigabytes) and limited computer hard disk storage space precluded detailed analysis of the data at sea, although broad band averages were used as aids in developing splices and determining lithologic boundaries. This data report describes the methods used to collect these data and their shipboard and postcruise processing. These initial results provide the basis for further postcruise research.
The impact of a computer-based adult literacy program on literacy and numeracy: evidence from India
Abstract:
With over 700 million illiterate adults in the world, many governments have implemented adult literacy programs across the world, although typically with low rates of success partly because the quality of teaching is low. One solution may lie in the standardization of teaching provided by computer-aided instruction. We present the first rigorous evidence of the effectiveness of a computer-based adult literacy program. A randomized control trial study of TARA Akshar Plus, an Indian adult literacy program, was implemented in the state of Uttar Pradesh in India. We find large, significant impacts of this computer-aided program on literacy and numeracy outcomes. We compare the improvement in learning to that of other traditional adult literacy programs and conclude that TARA Akshar Plus is effective in increasing literacy and numeracy for illiterate adult women.
Abstract:
This paper presents a blended learning approach and a study evaluating instruction in a software engineering-related course unit as part of an undergraduate engineering degree program in computing. In the past, the course unit had a lecture-based format. In view of student underachievement and the high course unit dropout rate, a distance-learning system was deployed, where students were allowed to choose between a distance-learning approach driven by a moderate constructivist instructional model or a blended-learning approach. The results of this experience are presented, with the aim of showing the effectiveness of the teaching/learning system deployed compared to the lecture-based system previously in place. The grades earned by students under the new system, following the distance-learning and blended-learning courses, are compared statistically to the grades attained in earlier years in the traditional face-to-face classroom (lecture-based) learning.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
A new formalism, called Hiord, for defining type-free higher-order logic programming languages with predicate abstraction is introduced. A model theory, based on partial combinatory algebras, is presented, with respect to which the formalism is shown sound. A programming language built on a subset of Hiord, and its implementation, are discussed. A new proposal for defining modules in this framework is considered, along with several examples.
Abstract:
Andorra-I is the first implementation of a language based on the Andorra Principle, which states that determinate goals can (and should) be run before other goals, and even in a parallel fashion. This principle has materialized in a framework called the Basic Andorra model, which allows or-parallelism as well as (dependent) and-parallelism for determinate goals. In this report we show that it is possible to further extend this model in order to allow general independent and-parallelism for nondeterminate goals, without greatly modifying the underlying implementation machinery. A simple and easy way to realize such an extension is to make each (nondeterminate) independent goal determinate, by using a special "bagof" construct. We also show that this can be achieved automatically by compile-time translation from original Prolog programs. A transformation that fulfils this objective and which can be easily automated is presented in this report.
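The idea of the "bagof" transformation can be sketched in plain Prolog (a hypothetical illustration with made-up predicates p/1 and q/1; the actual Andorra-I translation scheme is more elaborate):

```prolog
% Example program (p/1 and q/1 are invented for illustration).
p(1). p(2). p(3).
q(X) :- X > 1.

% Original conjunction: the call to p/1 is nondeterminate, so under the
% Basic Andorra model it cannot be scheduled early.
orig(X) :- p(X), q(X).

% Transformed conjunction: bagof/3 makes the call determinate by
% collecting all solutions of p/1 at once; member/2 then re-enumerates
% the alternatives, preserving the original nondeterminism.
trans(X) :- bagof(Y, p(Y), Ys), member(X, Ys), q(X).
```

Both `orig/1` and `trans/1` produce the same solutions (X = 2 and X = 3), but in the transformed version the (now determinate) collection step can be run early, and in parallel, under the extended model.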
Abstract:
The Andorra family of languages (which includes the Andorra Kernel Language, AKL) is aimed, in principle, at simultaneously supporting the programming styles of Prolog and committed-choice languages. On the other hand, AKL requires a somewhat detailed specification of control by the user. This could be avoided by programming in Prolog to run on AKL. However, Prolog programs cannot be executed directly on AKL. This is due to a number of factors, from more or less trivial syntactic differences to more involved issues such as the treatment of cut and making the exploitation of certain types of parallelism possible. This paper provides basic guidelines for constructing an automatic compiler of Prolog programs into AKL, which can bridge those differences. In addition to supporting Prolog, our style of translation achieves independent and-parallel execution where possible, which is relevant since this type of parallel execution preserves, through the translation, the user-perceived "complexity" of the original Prolog program.
Abstract:
We discuss a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors are first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from (global) static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be checked statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer, including properties defined by user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis.
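A (partial) specification of the kind described above might look as follows in a Ciao-style assertion language (a hypothetical sketch; the exact property names and syntax are assumptions, not quoted from the paper):

```prolog
% Sketch of an assertion-annotated predicate. The pred assertion states:
% if qsort/2 is called with its first argument a list (the ":" precondition),
% then on success the second argument is also a list (the "=>" postcondition),
% and the call does not fail (the "+" computational property).
:- pred qsort(Xs, Ys) : list(Xs) => list(Ys) + not_fails.

qsort([], []).
qsort([X|L], S) :-
    partition(L, X, Sm, Lg),
    qsort(Sm, SmS),
    qsort(Lg, LgS),
    append(SmS, [X|LgS], S).
```

The static analyzer would try to prove each part of such an assertion from its global analysis results; any part it can neither prove nor disprove would be compiled into a run-time test, as the abstract describes.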