1000 results for Compilation process


Relevância:

70.00%

Publicador:

Resumo:

This paper discusses particular linguistic challenges in the task of reusing published dictionaries, conceived as structured sources of lexical information, in the compilation process of a machine-tractable, thesaurus-like lexical database for Brazilian Portuguese. After delimiting the scope of the polysemous term thesaurus, the paper focuses on the refinement of the resulting object by a small team, in a form compatible with and inspired by WordNet guidelines; comments on the dictionary entries; addresses selected problems found in the process of extracting the relevant lexical information from the selected dictionaries; and provides some strategies to overcome them.
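A minimal sketch of the kind of data structure such a thesaurus-like database implies (hypothetical, not the paper's actual model): words grouped into WordNet-style synsets, with an index exposing polysemy.

```python
# Hypothetical sketch of a thesaurus-like lexical database: words grouped
# into synsets (sets of words sharing one sense), WordNet-style.
from dataclasses import dataclass, field

@dataclass
class Synset:
    """A set of word forms sharing one sense."""
    gloss: str
    words: set = field(default_factory=set)

def build_index(synsets):
    """Map each word form to the synsets it participates in."""
    index = {}
    for s in synsets:
        for w in s.words:
            index.setdefault(w, []).append(s)
    return index

synsets = [
    Synset("move quickly on foot", {"run", "sprint"}),
    Synset("operate or function", {"run", "work"}),
]
index = build_index(synsets)
print(len(index["run"]))  # "run" is polysemous: member of 2 synsets
```

The index makes the polysemy problem the paper addresses concrete: one dictionary headword may need to be split across several synsets.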

Relevância:

70.00%

Publicador:

Resumo:

We present and evaluate a compiler from Prolog (and extensions) to JavaScript which makes it possible to use (constraint) logic programming to develop the client side of web applications while being compliant with current industry standards. Targeting JavaScript makes (C)LP programs executable in virtually every modern computing device with no additional software requirements from the point of view of the user. In turn, the use of a very high-level language facilitates the development of high-quality, complex software. The compiler is a back end of the Ciao system and supports most of its features, including its module system and its rich language extension mechanism based on packages. We present an overview of the compilation process and a detailed description of the run-time system, including the support for modular compilation into separate JavaScript code. We demonstrate the maturity of the compiler by testing it with complex code such as a CLP(FD) library written in Prolog with attributed variables. Finally, we validate our proposal by measuring the performance of some LP and CLP(FD) benchmarks running on top of major JavaScript engines.
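A toy illustration of the general idea of compiling logic-programming clauses to JavaScript (not Ciao's actual compilation scheme, which targets a full run-time system): facts become entries in a JS array that a small client-side runtime could query.

```python
# Toy sketch (NOT the Ciao back end): compile a list of Prolog-style facts
# into JavaScript source, so a browser-side runtime could query them.
def compile_facts_to_js(facts):
    """facts: list of (functor, args) tuples -> JavaScript source string."""
    entries = ",\n  ".join(
        '{{f: "{}", args: {}}}'.format(f, list(args)).replace("'", '"')
        for f, args in facts
    )
    return "const kb = [\n  " + entries + "\n];"

js = compile_facts_to_js([("parent", ("ana", "rui")), ("parent", ("rui", "eva"))])
print(js.splitlines()[0])  # const kb = [
```

A real back end like the one described must also compile rules, unification, and backtracking; this sketch only shows why the target being JavaScript makes the result runnable anywhere a browser runs.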

Relevância:

60.00%

Publicador:

Resumo:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Engenharia Informática (Informatics Engineering)

Relevância:

60.00%

Publicador:

Resumo:

In recent years there has been a marked increase in the use of mobile devices worldwide, and the applications developed for this specific type of device, known as apps, have gained enormous popularity. More and more companies seek a presence on the various mobile operating systems in order to support and grow their business, widening their range of potential customers. In this context, several tools have emerged to ease the development of mobile applications, known as cross-platform frameworks. These frameworks led to the appearance of web platforms that allow cross-platform applications to be created without requiring programming knowledge. Thus, based on the analysis of several online mobile application builders and of the different existing mobile application development strategies, the implementation of a web platform capable of creating native Android and iOS applications (two of the most widely used operating systems today) was proposed. After the web platform, named MobileAppBuilder, was developed, its quality and that of the applications it creates were evaluated through a questionnaire answered by 10 individuals with a background in Informatics Engineering, resulting in an overall rating of "excellent". To analyse the performance of the applications produced by the platform, comparative tests were carried out between a MobileAppBuilder application and two counterparts built with two of the online builders studied, namely Andromo and Como. The results of these tests revealed that MobileAppBuilder generates applications that are lighter, faster, and more efficient in some respects, particularly at startup.

Relevância:

60.00%

Publicador:

Resumo:

This paper reports the ongoing project (under way since 2002) of developing a wordnet for Brazilian Portuguese (Wordnet.Br) from scratch. In particular, it describes the process of constructing the Wordnet.Br core database, which has 44,000 words organized in 18,500 synsets. Accordingly, it briefly sketches the project's overall methodology, its lexical resources, the synset compilation process, and the Wordnet.Br editor, a GUI (graphical user interface) that aids the linguist in the compilation and maintenance of Wordnet.Br. It concludes with the planned further work.

Relevância:

60.00%

Publicador:

Resumo:

This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical self-contained model of concurrency, which enables a simplified second model for implementing the compilation process. There is a further presentation of principles that, if followed, maximise the potential levels of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronization of objects. Further, the model is complete enough that a compiler can be, and has been, practically built. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented is based upon information hiding, sharing and containment of objects, and the dividing up of methods on the basis of a command/query division. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles naturally arise from good programming practice. Summary. In summary, this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e. no parallel primitives are added, and the parallel program is modelled to execute with semantics equivalent to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
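The recursive-descent, AST-building parsing style the compilation model describes can be sketched minimally (illustrative only; the thesis targets an Eiffel subset, not this toy expression grammar):

```python
# Minimal recursive-descent parser building an abstract syntax tree.
# Grammar (hypothetical): expr -> term (('+'|'-') term)* ; term -> NUMBER
import re

def tokenize(src):
    return re.findall(r"\d+|[+\-]", src)

def parse_expr(tokens, pos=0):
    """Return (ast, next_pos); AST nodes are nested tuples."""
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in "+-":
        op = tokens[pos]
        right, pos = parse_term(tokens, pos + 1)
        node = (op, node, right)  # left-associative tree
    return node, pos

def parse_term(tokens, pos):
    return ("num", int(tokens[pos])), pos + 1

ast, _ = parse_expr(tokenize("1+2-3"))
print(ast)  # ('-', ('+', ('num', 1), ('num', 2)), ('num', 3))
```

Each grammar rule maps to one function, and the returned tree is the structure a later attribute-grammar pass would walk for semantic analysis and code generation.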

Relevância:

60.00%

Publicador:

Resumo:

The increasing cost of developing complex software systems has created a need for tools which aid software construction. One area in which significant progress has been made is with the so-called Compiler Writing Tools (CWTs); these aim at automated generation of various components of a compiler and hence at expediting the construction of complete programming language translators. A number of CWTs are already in quite general use, but investigation reveals significant drawbacks with current CWTs, such as lex and yacc. The effective use of a CWT typically requires a detailed technical understanding of its operation and involves tedious and error-prone input preparation. Moreover, CWTs such as lex and yacc address only a limited aspect of the compilation process; for example, actions necessary to perform lexical symbol valuation and abstract syntax tree construction must be explicitly coded by the user. This thesis presents a new CWT called CORGI (COmpiler-compiler from Reference Grammar Input) which deals with the entire `front-end' component of a compiler; this includes the provision of necessary data structures and routines to manipulate them, both generated from a single input specification. Compared with earlier CWTs, CORGI has a higher-level and hence more convenient user interface, operating on a specification derived directly from a `reference manual' grammar for the source language. Rather than developing a compiler-compiler from first principles, CORGI has been implemented by building a further shell around two existing compiler construction tools, namely lex and yacc. CORGI has been demonstrated to perform efficiently in realistic tests, both in terms of speed and the effectiveness of its user interface and error-recovery mechanisms.
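A tiny lexer generator in the spirit of lex conveys what such tools automate (illustrative only; CORGI's actual input is a reference-manual grammar, not this spec format): token classes given as name/regex pairs are compiled into one scanner.

```python
# Hypothetical lex-style lexer generator: compile (name, regex) pairs into
# a single scanner function using named groups.
import re

def make_lexer(spec):
    """spec: list of (token_name, regex) -> function(src) -> [(name, text)]."""
    master = re.compile("|".join("(?P<%s>%s)" % (n, r) for n, r in spec))
    def lex(src):
        return [(m.lastgroup, m.group()) for m in master.finditer(src)
                if m.lastgroup != "SKIP"]
    return lex

lex = make_lexer([("NUM", r"\d+"), ("ID", r"[A-Za-z_]\w*"),
                  ("OP", r"[+\-*/=]"), ("SKIP", r"\s+")])
print(lex("x = 42"))  # [('ID', 'x'), ('OP', '='), ('NUM', '42')]
```

The point of a higher-level CWT like CORGI is that even this spec-writing step, plus symbol valuation and AST construction, is derived from the grammar rather than hand-coded.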

Relevância:

60.00%

Publicador:

Resumo:

This research project is based on the Multimodal Corpus of Chinese Court Interpreting (MUCCCI [mutʃɪ]), a small-scale multimodal corpus on the basis of eight authentic court hearings with Chinese-English interpreting in Mainland China. The corpus has approximately 92,500 word tokens in total. Besides the transcription of linguistic and para-linguistic features, utilizing the facial expression classification rules suggested by Black and Yacoob (1995), MUCCCI also includes approximately 1,200 annotations of facial expressions linked to the six basic types of human emotions, namely, anger, disgust, happiness, surprise, sadness, and fear (Black & Yacoob, 1995). This thesis is an example of conducting qualitative analysis on interpreter-mediated courtroom interactions through a multimodal corpus. In particular, miscommunication events (MEs) and the reasons behind them were investigated in detail. During the analysis, although queries were conducted based on non-verbal annotations when searching for MEs, both verbal and non-verbal features were considered indispensable parts contributing to the entire context. This thesis also includes a detailed description of the compilation process of MUCCCI utilizing ELAN, from data collection to transcription, POS tagging and non-verbal annotation. The research aims at assessing the possibility and feasibility of conducting qualitative analysis through a multimodal corpus of court interpreting. The concept of integrating both verbal and non-verbal features to contribute to the entire context is emphasized. The qualitative analysis focusing on MEs can provide an inspiration for improving court interpreters’ performances. All the constraints and difficulties presented can be regarded as a reference for similar research in the future.

Relevância:

30.00%

Publicador:

Resumo:

OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language, in an integrated development environment, and compiled to a standard stack of web technologies. In the platform's core, there is a compiler and a deployment service that transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer's productivity. In the previous model, a full application was the only compilation and deployment unit. When the developer published an application, even if he only changed a very small aspect of it, the application would be fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer performs small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from a parallel execution model. To this end, we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement of the compilation and deployment times for the aforementioned development scenario.
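The two ideas described, content-hash caching per compilation unit and a scheduler that compiles independent units in parallel, can be sketched as follows (hypothetical sketch, not the OutSystems implementation):

```python
# Hypothetical sketch: per-unit caching keyed by a content hash, with
# independent units compiled in parallel by a thread pool.
import hashlib
from concurrent.futures import ThreadPoolExecutor

cache = {}  # unit name -> (source hash, compiled artifact)

def compile_unit(name, source):
    """Recompile only when the unit's source changed since the last build."""
    digest = hashlib.sha256(source.encode()).hexdigest()
    if name in cache and cache[name][0] == digest:
        return cache[name][1]          # cache hit: reuse previous output
    artifact = "compiled:" + name      # stand-in for real code generation
    cache[name] = (digest, artifact)
    return artifact

units = {"Screen1": "a := 1", "Screen2": "b := 2", "Logic": "c := a + b"}
with ThreadPoolExecutor() as pool:     # independent units build in parallel
    results = dict(zip(units, pool.map(compile_unit, units, units.values())))
print(sorted(results))  # ['Logic', 'Screen1', 'Screen2']
```

On a second build where only one unit's source changed, all other units hit the cache, which is exactly the small-change scenario the abstract targets.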

Relevância:

30.00%

Publicador:

Resumo:

The examinations taken by high-school graduates in Spain and the role of the examination in the university admissions process are described. The following issues arising in the assessment of the process are discussed: reliability of grading, comparability of the grades and scores (equating), maintenance of standards, and the compilation and use of information from the grading process and its integration in the operational grading. Various schemes for score adjustment are reviewed and the feasibility of their implementation discussed. The advantages of pretesting of items and of empirical checks of experts' judgements are pointed out. The paper concludes with an outline of a planned reorganisation of higher education in Spain, and with a call for a comprehensive programme of empirical research concurrent with the operation of the examination and scoring system.

Relevância:

30.00%

Publicador:

Resumo:

As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex. This increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times, and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components. This process is limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness, and makes explicit the underlying system models, whereas these are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled in an off-line process into a set of concurrent, localized diagnostic rules. These are then combined on-line along with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in detection of failures beyond that of the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
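The rule side of such an engine can be sketched minimally (illustrative only, not the CME algorithm): precompiled, localized rules map sensor observations to local diagnoses, which are combined on-line into a verdict. All sensor names and thresholds below are hypothetical.

```python
# Illustrative sketch of compiled diagnostic rules: each rule is a
# (sensor, predicate, diagnosis) triple produced off-line; on-line,
# rules are applied to current readings to assemble a diagnosis.
RULES = [
    ("valve_pressure", lambda v: v < 10.0, "valve stuck closed"),
    ("bus_voltage",    lambda v: v < 24.0, "power bus undervolt"),
    ("wheel_rpm",      lambda v: v == 0.0, "reaction wheel failed"),
]

def diagnose(readings):
    """Apply each localized rule to the current sensor readings."""
    return [fault for sensor, pred, fault in RULES
            if sensor in readings and pred(readings[sensor])]

faults = diagnose({"valve_pressure": 4.2, "bus_voltage": 28.1, "wheel_rpm": 120.0})
print(faults)  # ['valve stuck closed']
```

The point of compiling the rules from component models, rather than writing them by hand, is that the rule set stays consistent with the models while retaining the fast, inspectable response of a rule-based system.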

Relevância:

30.00%

Publicador:

Resumo:

This article reflects on key methodological issues emerging from children and young people's involvement in data analysis processes. We outline a pragmatic framework illustrating different approaches to engaging children, using two case studies of children's experiences of participating in data analysis. The article highlights methods of engagement and important issues such as the balance of power between adults and children, training, support, ethical considerations, time and resources. We argue that involving children in data analysis processes can have several benefits, including enabling a greater understanding of children's perspectives and helping to prioritise children's agendas in policy and practice. (C) 2007 The Author(s). Journal compilation (C) 2007 National Children's Bureau.

Relevância:

30.00%

Publicador:

Resumo:

The bathymetry raster, with a cell size of 5 m x 5 m, was processed from unpublished single-beam data from the Argentine Antarctica Institute (IAA, 2010) and multibeam data from the United Kingdom Hydrographic Office (UKHO, 2012). A coastline digitized from a satellite image (DigitalGlobe, 2014) supplemented the interpolation process. The 'Topo to Raster' tool in ArcMap 10.3 was used to merge the three data sets, with the coastline serving as the 0 m contour in the interpolation ('contour type' option).

Relevância:

30.00%

Publicador:

Resumo:

Concentrations of total organic carbon (TOC) were determined on samples collected during six cruises in the northern Arabian Sea during the 1995 US JGOFS Arabian Sea Process Study. Total organic carbon concentrations and integrated stocks in the upper ocean varied both spatially and seasonally. Highest mixed-layer TOC concentrations (80-100 µM C) were observed near the coast when upwelling was not active, while upwelling tended to reduce local concentrations. In the open ocean, highest mixed-layer TOC concentrations (80-95 µM C) developed in winter (period of the NE Monsoon) and remained through mid summer (early to mid-SW Monsoon). Lowest open ocean mixed-layer concentrations (65-75 µM C) occurred late in the summer (late SW Monsoon) and during the Fall Intermonsoon period. The changes in TOC concentrations resulted in seasonal variations in mean TOC stocks (upper 150 m) of 1.5-2 mole C/m**2, with the lowest stocks found late in the summer during the SW Monsoon-Fall Intermonsoon transition. The seasonal accumulation of TOC north of 15°N was 31-41 x 10**12 g C, mostly taking place over the period of the NE Monsoon, and equivalent to 6-8% of annual primary production estimated for that region in the mid-1970s. A net TOC production rate of 12 mmole C/m**2/d over the period of the NE Monsoon represented ~80% of net community production. Net TOC production was nil during the SW Monsoon, so vertical export would have dominated the export terms over that period. Total organic carbon concentrations varied in vertical profiles with the vertical layering of the water masses, with the Persian Gulf Water TOC concentrations showing a clear signal. Deep water (>2000 m) TOC concentrations were uniform across the basin and over the period of the cruises, averaging 42.3±1.4 µM C.

Relevância:

30.00%

Publicador:

Resumo:

The contributions of total organic carbon and nitrogen to elemental cycling in the surface layer of the Sargasso Sea are evaluated using a 5-yr time-series data set (1994-1998). Surface-layer total organic carbon (TOC) and total organic nitrogen (TON) concentrations ranged from 60 to 70 µM C and 4 to 5.5 µM N seasonally, resulting in a mean C : N molar ratio of 14.4±2.2. The highest surface concentrations varied little during individual summer periods, indicating that net TOC production ceased during the highly oligotrophic summer season. Winter overturn and mixing of the water column were both the cause of concentration reductions and the trigger for net TOC production each year following nutrient entrainment and subsequent new production. The net production of TOC varied with the maximum in the winter mixed-layer depth (MLD), with greater mixing supporting the greatest net production of TOC. In winter 1995, the TOC stock increased by 1.4 mol C/m**2 in response to maximum mixing depths of 260 m. In subsequent years experiencing shallower maxima in MLD (<220 m), TOC stocks increased <0.7 mol C/m**2. Overturn of the water column served to export TOC to depth (>100 m), with the amount exported dependent on the depth of mixing (total export ranged from 0.4 to 1.4 mol C/m**2/yr). The exported TOC was comprised both of material resident in the surface layer during late summer (resident TOC) and material newly produced during the spring bloom period (fresh TOC). Export of resident TOC ranged from 0.5 to 0.8 mol C/m**2/yr, covarying with the maximum winter MLD. Export of fresh TOC varied from nil to 0.8 mol C/m**2/yr. Fresh TOC was exported only after a threshold maximum winter MLD of ~200 m was reached. In years with shallower mixing, fresh TOC export and net TOC production in the surface layer were greatly reduced. The decay rates of the exported TOC also covaried with maximum MLD. 
The year with deepest mixing resulted in the highest export and the highest decay rate (0.003 1/d) while shallow and low export resulted in low decay rates (0.0002 1/d), likely a consequence of the quality of material exported. The exported TOC supported oxygen utilization at dC : dO2 molar ratios ranging from 0.17 when TOC export was low to 0.47 when it was high. We estimate that exported TOC drove 15-41% of the annual oxygen utilization rates in the 100-400 m depth range. Finally, there was a lack of variability in the surface-layer TON signal during summer. The lack of a summer signal for net TON production suggests a small role for N2 fixation at the site. We hypothesize that if N2 fixation is responsible for elevated N : P ratios in the main thermocline of the Sargasso Sea, then the process must take place south of Bermuda and the signal transported north with the Gulf Stream system.