938 results for Domain-specific programming languages
Abstract:
The purpose of this thesis is the modelling and implementation of an extension of the Alchemist simulator, named Biochemistry, which makes it possible to simulate a multi-cellular environment. In order to simulate as many biological processes as possible, the simulator must allow cellular heterogeneity to be modelled through different aspects of cellular systems, such as intracellular reactions, signalling between adjacent cells, cell junctions and movement. It must also allow actions that are impossible in the real world, such as the destruction of chemical molecules or their creation from nothing. More specifically, the following biochemical processes were modelled and implemented: creation and destruction of chemical molecules, intracellular biochemical reactions, exchange of molecules between adjacent cells, and creation and destruction of cell junctions. Particular emphasis was placed on modelling reactions between neighbouring cells, whose mechanism is similar to that used in cell signalling. Every part of the system was modelled after phenomena actually present in multi-cellular systems and documented in the literature. For the specification of the chemical reactions given as input to the simulation, it was necessary to implement a Domain Specific Language (DSL) that allows reactions to be written in a way close to natural language, making the simulator usable even by people without specific knowledge of biology. The correctness of the project was validated through tests carried out with data available in the literature concerning well-known and widely studied biological processes.
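As an illustration of what a reaction-oriented DSL front end can look like, the sketch below parses reactions written close to natural chemical notation into reactant and product multisets. The grammar and the Python implementation are hypothetical and are not the actual Alchemist Biochemistry syntax; they only show the kind of parsing layer such a simulator needs.

```python
import re

# Hypothetical mini-grammar, not the actual Alchemist Biochemistry syntax:
# a reaction is written as "A + 2 B --> C" and parsed into reactant and
# product multisets that a simulator could turn into executable actions.
REACTION = re.compile(r"^\s*(.+?)\s*-->\s*(.+?)\s*$")
TERM = re.compile(r"^\s*(?:(\d+)\s+)?([A-Za-z_]\w*)\s*$")

def parse_side(side):
    """Parse one side of a reaction, e.g. 'A + 2 B', into {'A': 1, 'B': 2}."""
    species = {}
    for term in side.split("+"):
        m = TERM.match(term)
        if not m:
            raise ValueError(f"cannot parse term: {term!r}")
        count, name = m.groups()
        species[name] = species.get(name, 0) + int(count or 1)
    return species

def parse_reaction(text):
    """Parse 'A + 2 B --> C' into (reactants, products) multisets."""
    m = REACTION.match(text)
    if not m:
        raise ValueError(f"not a reaction: {text!r}")
    return parse_side(m.group(1)), parse_side(m.group(2))

if __name__ == "__main__":
    print(parse_reaction("glucose + 2 ATP --> product"))
    # ({'glucose': 1, 'ATP': 2}, {'product': 1})
```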
Abstract:
This work proposes a comparative study of the use of multimedia infographics by the websites Clarín.com, from Argentina, and Folha.com, from Brazil. The research aims to verify and analyse how these two important Latin American online news outlets have used HTML5 technology to advance the interactive possibilities of this journalistic genre. To that end, the comparative analysis addresses multimedia infographics, which have undergone profound technological changes that have altered both the format and the content of the news. In addition to the theoretical framework and literature review on infographics, newsgames, transmedia storytelling, online journalism, interactivity and the programming languages used to produce multimedia infographics, the work carried out a comparative analysis of the sections Infográficos, published by Folha.com, and Especiales Multimedia, by Clarín.com. The study, both quantitative and qualitative, examined the narrative and informational resources, tools and Internet programming technologies employed by the two outlets, based on the analysis model proposed by Alberto Cairo in Infografia 2.0 – visualización interactiva de información en prensa. The research showed that, although Clarín.com used Flash technology in most of the multimedia infographics analysed, the results of the comparative analysis indicate that the infographics of the Argentine online newspaper afforded higher levels of interactivity than the multimedia infographics of Folha.com, which were developed mostly in HTML5.
Abstract:
To perform at the highest level, athletes must possess above-average perceptual-cognitive abilities. This faculty, reflected on the field by athletes' vision and game intelligence, allows them to extract the key information from the visual scene. Sport science has long studied perceptual-cognitive expertise within the athletes' own sporting environment. Recently, studies have reported that this expertise can also manifest itself outside that context, for example during everyday activities. Moreover, recent theories about the brain's plasticity have led researchers to develop tools to train athletes' perceptual-cognitive abilities in order to make them more effective on the field. These methods are most often specific to the targeted discipline. However, a new perceptual-cognitive training tool, called 3-Dimensional Multiple Object Tracking (3D-MOT) and devoid of any sporting context, has recently emerged and was the focus of our research. One of our objectives was to highlight both sport-specific and non-specific perceptual-cognitive expertise in athletes within a single study. We assessed the perception of biological motion in soccer players and non-athletes in a virtual reality room. The athletes were systematically better than novices, in terms of accuracy and reaction time, at discriminating the direction of biological motion both in a soccer-specific action (a kick) and in an everyday action (walking). These results indicate that athletes have a better ability to perceive the biological movements performed by others. Playing soccer therefore seems to confer a fundamental advantage that goes beyond sport-specific functions. These findings parallel athletes' exceptional performance in processing dynamic visual scenes that are likewise devoid of sporting context. Soccer players outperformed novices in the 3D-MOT test, which consists of tracking moving targets and taxes perceptual-cognitive abilities. Their visual tracking speed as well as their learning ability were superior. These results confirmed data previously obtained with athletes. 3D-MOT is an attentional tracking test that engages the active processing of dynamic visual information, in particular selective, dynamic and sustained attention as well as working memory. This tool can be used to train athletes' perceptual-cognitive functions. Soccer players trained with the 3D-MOT over 30 sessions showed a 15% improvement in passing decision-making on the field compared with players in control groups. These data demonstrate for the first time a perceptual-cognitive transfer from the laboratory to the field following perceptual-cognitive training that is not contextual to the targeted athlete's sport. Our research helps to understand athletes' expertise through both the sport-specific and non-specific approaches and also presents perceptual-cognitive training tools, in particular the 3D-MOT, for improving performance in high-level sport.
Abstract:
Dynamically typed programming languages such as JavaScript and Python defer type checking until run time. To optimize the performance of these languages, virtual machine implementations for dynamic languages must try to eliminate redundant dynamic type tests. This is usually done using a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique does not require costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions are added to basic block versioning to give it interprocedural optimization capabilities. A first extension gives it the ability to attach type information to object properties and global variables. Entry point specialization then allows type information to be passed from calling functions to the functions they call. Finally, call continuation specialization transmits the types of return values from callees back to callers at no dynamic cost. We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
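A toy sketch of the core idea, assuming a deliberately simplified block representation (this is not the thesis's actual VM, which generates real machine code): each basic block is compiled once per incoming type context, and a type test whose outcome is already implied by that context is simply not emitted.

```python
# Toy illustration of the idea behind lazy basic block versioning (not the
# actual implementation): each basic block may be specialized for the type
# context in which it is reached, so type tests whose outcome is already
# known in that context are not emitted again.

versions = {}  # (block_id, frozen type context) -> generated "code" (a list of strings)

def compile_block(block_id, ops, type_ctx):
    """Generate code for one block under a known mapping var -> type."""
    key = (block_id, tuple(sorted(type_ctx.items())))
    if key in versions:                      # a version for this context exists: reuse it
        return versions[key]
    code, ctx = [], dict(type_ctx)
    for op, var, expected in ops:
        if op == "type_test":
            if ctx.get(var) == expected:     # type already known in this context
                continue                     # -> redundant test eliminated
            code.append(f"check {var} is {expected}")
            ctx[var] = expected              # a passed test refines the context
        elif op == "use":
            code.append(f"use {var} as {ctx.get(var, 'unknown')}")
    versions[key] = code
    return code

# First visit with an empty context emits the test; a later visit from a
# context where x is already known to be an int emits no test at all.
block = [("type_test", "x", "int"), ("use", "x", None)]
print(compile_block("B1", block, {}))            # test emitted
print(compile_block("B1", block, {"x": "int"}))  # test eliminated
```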
Abstract:
Research endeavors on spoken dialogue systems in the 1990s and 2000s have led to the deployment of commercial spoken dialogue systems (SDS) in microdomains such as customer service automation, reservation/booking and question answering systems. Recent research in SDS has focused on the development of applications in different domains (e.g. virtual counseling, personal coaches, social companions) which require more sophistication than the previous generation of commercial SDS. The focus of this research project is the delivery of behavior change interventions based on the brief intervention counseling style via spoken dialogue systems. Brief interventions (BI) are evidence-based, short, well-structured, one-on-one counseling sessions. Many challenges are involved in delivering BIs to people in need, such as finding the time to administer them in busy doctors' offices, obtaining the extra training that helps staff become comfortable providing these interventions, and managing the cost of delivering the interventions. Fortunately, recent developments in spoken dialogue systems make it possible to build systems that can deliver brief interventions. The overall objective of this research is to develop a data-driven, adaptable dialogue system for brief interventions for problematic drinking behavior, based on reinforcement learning methods. The implications of this research project include, but are not limited to, assessing the feasibility of delivering structured brief health interventions with a data-driven spoken dialogue system. Furthermore, while the experimental system focuses on harmful alcohol drinking as a target behavior in this project, the knowledge and experience produced may also lead to the implementation of similarly structured health interventions and assessments in domains other than alcohol (e.g. obesity, drug use, lack of exercise), using statistical machine learning approaches. Beyond the design of the dialogue system itself, the semantic and emotional meaning of user utterances has a strong impact on the interaction. To perform domain-specific reasoning and recognize concepts in user utterances, a named-entity recognizer and an ontology are designed and evaluated. To understand affective information conveyed through text, lexicons and a sentiment analysis module are developed and tested.
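As a rough sketch of the kind of data-driven policy such a dialogue manager could use, the snippet below implements tabular Q-learning over invented dialogue states, actions and rewards; none of these names come from the project itself, and a real system would derive states and rewards from the dialogue and intervention outcomes.

```python
import random
from collections import defaultdict

# Hedged sketch of a tabular Q-learning dialogue policy; states, actions and
# rewards are invented placeholders, not the project's actual dialogue model.
ACTIONS = ["ask_open_question", "reflect", "give_feedback", "summarize"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2

Q = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection over the dialogue actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning update after observing the user's response."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example: one simulated turn, where the reward would come from an
# engagement or task-success signal in the real system.
s, s_next = "elicit_motivation", "user_engaged"
a = choose_action(s)
update(s, a, reward=1.0, next_state=s_next)
```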
Abstract:
This paper focuses on two basic issues: the anxiety-generating nature of the interpreting task and the relevance of interpreter trainees' academic self-concept. The first has already been acknowledged, although not extensively researched, in several papers, and the second has only been mentioned briefly in the interpreting literature. This study seeks to examine the relationship between the anxiety and academic self-concept constructs among interpreter trainees. An adapted version of the Foreign Language Anxiety Scale (Horwitz et al., 1986), the Academic Autoconcept Scale (Schmidt, Messoulam & Molina, 2008) and a background information questionnaire were used to collect data. Student's t-test results indicated that female students reported experiencing significantly higher levels of anxiety than male students. No significant gender difference in self-concept levels was found. Correlation analysis results suggested, on the one hand, that younger would-be interpreters suffered from higher anxiety levels and that students with higher marks tended to have lower anxiety levels; and, on the other hand, that younger students had lower self-concept levels and higher-ability students held higher self-concept levels. In addition, the results revealed that students with higher anxiety levels tended to have lower self-concept levels. Based on these findings, recommendations for interpreting pedagogy are discussed.
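For readers who want to reproduce this style of analysis, the snippet below runs an independent-samples t-test and a Pearson correlation on synthetic data; the numbers are fabricated placeholders chosen only to mimic the direction of the reported effects, not the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic example of the reported analyses (not the study's real scores):
# an independent-samples t-test comparing anxiety by gender, and a Pearson
# correlation between anxiety and academic self-concept.
rng = np.random.default_rng(0)
anxiety_f = rng.normal(loc=3.4, scale=0.6, size=40)   # hypothetical female scores
anxiety_m = rng.normal(loc=3.0, scale=0.6, size=25)   # hypothetical male scores
t, p = stats.ttest_ind(anxiety_f, anxiety_m)
print(f"t = {t:.2f}, p = {p:.3f}")

anxiety = np.concatenate([anxiety_f, anxiety_m])
self_concept = 5.0 - 0.5 * anxiety + rng.normal(scale=0.4, size=anxiety.size)
r, p_r = stats.pearsonr(anxiety, self_concept)
print(f"r = {r:.2f}, p = {p_r:.3f}")   # a negative r mirrors the reported pattern
```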
Abstract:
The large upfront investments required for game development pose a severe barrier for the wider uptake of serious games in education and training. Also, there is a lack of well-established methods and tools that support game developers at preserving and enhancing the games’ pedagogical effectiveness. The RAGE project, which is a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system’s concept and its practical benefits. First, the Emotion Detection component uses the learners’ webcams for capturing their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners’ progress without bothering about the required statistics computations. Third, a set of language processing components accommodate the analysis of textual inputs of learners, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are exemplary for the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
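A minimal sketch of the component idea, with illustrative names rather than the actual RAGE API: the component touches engine-specific services only through a narrow bridge interface, so the same component logic can be reused across engines and platforms.

```python
from abc import ABC, abstractmethod

# Hedged sketch of a portable game component: engine-specific services are
# reached only through a small bridge interface, so the component itself can
# be reused across engines. Names are illustrative, not the actual RAGE API.
class EngineBridge(ABC):
    @abstractmethod
    def log(self, msg: str) -> None: ...
    @abstractmethod
    def store(self, key: str, value: str) -> None: ...

class PerformanceStatsComponent:
    def __init__(self, bridge: EngineBridge):
        self.bridge = bridge
        self.scores: list[float] = []

    def record(self, score: float) -> None:
        self.scores.append(score)
        self.bridge.store("last_score", str(score))

    def mean_score(self) -> float:
        self.bridge.log("computing mean score")
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

class ConsoleBridge(EngineBridge):          # one engine-specific binding
    def log(self, msg): print("[engine]", msg)
    def store(self, key, value): print(f"[engine] {key}={value}")

comp = PerformanceStatsComponent(ConsoleBridge())
comp.record(0.8); comp.record(0.6)
print(comp.mean_score())
```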
Abstract:
The objective of this paper is to perform a quantitative comparison of Dweet.io and SensibleThings from different aspects. With the fast development of the Internet of Things, IoT platforms face growing challenges. This paper evaluates both systems in four parts. The first part presents a general comparison of the input methods and output functions provided by the platforms. The second part presents the security comparison, which focuses on the protocol types of the packets and the stability of the communication. The third part presents the scalability comparison as the data values become larger. The fourth part presents the scalability comparison as the processes are sped up. After these comparisons, I concluded that Dweet.io is easier to use on devices and supports more programming languages; it provides visualization and its data can be shared, and it is safer and more stable than SensibleThings. SensibleThings provides more openness and has better scalability in handling large values and high speeds.
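For context, the snippet below shows the kind of minimal interaction the comparison covers, using dweet.io's public HTTP endpoints; the thing name is invented, and the endpoints should be verified against the current dweet.io documentation before relying on them.

```python
import requests

# Minimal sketch of publishing and reading a value through dweet.io's public
# HTTP API (thing name is a made-up example; check the current documentation).
THING = "my-test-thing-42"

# Publish a reading for the thing.
requests.post(f"https://dweet.io/dweet/for/{THING}",
              json={"temperature": 21.5}, timeout=10)

# Read the latest reading back.
resp = requests.get(f"https://dweet.io/get/latest/dweet/for/{THING}", timeout=10)
print(resp.json())
```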
Abstract:
MAIDL, André Murbach; CARVILHE, Claudio; MUSICANTE, Martin A. Maude Object-Oriented Action Tool. Electronic Notes in Theoretical Computer Science. [S.l:s.n], 2008.
Abstract:
Starting from a general definition, the article describes central characteristics and requirements of competence-oriented teaching as well as its education-policy and learning-theory background. This general-didactic perspective is then linked to the conditions and objectives of the learning area Global Development. The opportunities and challenges that arise when implementing competence-oriented teaching are illustrated with a teaching example from the learning area Global Development. The focus here is on complex problems as a starting point, on enabling self-regulation by the students, and on the development of rich results that require meaningful communication and the networking of knowledge. (DIPF/Orig.)
Abstract:
Within the public sector, great change efforts are currently being made to meet future challenges. In the area of health care, change initiatives are implemented to enhance quality and efficiency. To this end, a lean change programme is being widely introduced in Sweden as well as internationally. The overriding aim of this study is to increase knowledge of what happens when change programmes (CP) such as lean are implemented in a healthcare organisation (HCO). Previous research has shown that the main obstacle to implementing CP in HCOs is their complexity. However, this complexity has often been reduced, as different factors such as management, professions, organisation and control have been studied separately. To fully capture the complexity of the HCO, Actor-Network Theory (ANT) was used in this study. In line with ANT, introducing lean can be described in terms of a translation process in which human and non-human actors are woven into a network. This approach allows for the incorporation of various factors in the study of a change process in a complex organisation. Drawing on ANT, this thesis explores how network constructions enable or impede change programmes. The approach is based on ethnographic monitoring of the implementation of lean in the Värmland county council public healthcare organisation. As a result of the holistic perspective, the study provides detailed descriptions of how complexity impacts on the implementation. It displays the relations enabling or impeding the implementation of CP and the methods actors use to establish and defend those relations. The contribution of the study is threefold. Empirically, the study monitors an HCO aiming to implement full-scale lean as philosophy, principle and tool. Methodologically, the study evaluates ANT as a methodological theory for studying CP in an HCO. Finally, the domain-specific contribution of the study is its identification of the relations and methods that impact on lean deployment.
Abstract:
Exceptions are an important feature of modern programming languages, but their compilation has traditionally been viewed as an advanced topic. In this article we show that the basic method of compiling exceptions using stack unwinding can be explained and verified both simply and precisely, using elementary functional programming techniques. In particular, we develop a compiler for a small language with exceptions, together with a proof of its correctness.
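A rough transcription of the idea in Python (the paper itself works with elementary functional programming techniques): expressions are compiled for a small stack machine where Catch pushes a handler mark, Throw unwinds the stack to the nearest mark, and the handler code compiled into the mark already carries the continuation of the enclosing computation. The instruction names and representation below are illustrative, not the paper's exact definitions.

```python
# Sketch of stack-unwinding compilation for a tiny expression language with
# exceptions: Catch installs a handler mark, Throw unwinds to the nearest mark.

def comp(e, cont):
    """e is ('val', n) | ('add', l, r) | ('throw',) | ('catch', body, handler)."""
    tag = e[0]
    if tag == "val":
        return [("PUSH", e[1])] + cont
    if tag == "add":
        return comp(e[1], comp(e[2], [("ADD",)] + cont))
    if tag == "throw":
        return [("THROW",)]          # the current continuation is discarded
    if tag == "catch":
        # the handler code compiled into the mark already carries the continuation
        return [("MARK", comp(e[2], cont))] + comp(e[1], [("UNMARK",)] + cont)
    raise ValueError(tag)

def run(code, stack=None):
    stack = [] if stack is None else stack
    for op in code:
        if op[0] == "PUSH":
            stack.append(("val", op[1]))
        elif op[0] == "ADD":
            (_, b), (_, a) = stack.pop(), stack.pop()
            stack.append(("val", a + b))
        elif op[0] == "MARK":
            stack.append(("mark", op[1]))                   # remember the handler
        elif op[0] == "UNMARK":
            v = stack.pop(); stack.pop(); stack.append(v)   # drop the unused mark
        elif op[0] == "THROW":
            while stack and stack[-1][0] != "mark":         # unwind the stack
                stack.pop()
            if not stack:
                return None                                 # uncaught exception
            return run(stack.pop()[1], stack)               # jump to the handler
    return stack[-1][1] if stack else None

# catch (1 + throw) with handler 42, then add 3  ==>  45
prog = ("add", ("catch", ("add", ("val", 1), ("throw",)), ("val", 42)), ("val", 3))
print(run(comp(prog, [])))
```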
Abstract:
The PROSPER (Proof and Specification Assisted Design Environments) project advocates the use of toolkits which allow existing verification tools to be adapted to a more flexible format so that they may be treated as components. A system incorporating such tools becomes another component that can be embedded in an application. This paper describes the PROSPER Toolkit which enables this. The nature of communication between components is specified in a language-independent way. It is implemented in several common programming languages to allow a wide variety of tools to have access to the toolkit.
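The sketch below illustrates the general pattern of wrapping a tool as a component behind a language-neutral message protocol, here plain JSON over TCP; the actual PROSPER interface language, datatypes and transport differ, so treat this purely as an analogy.

```python
import json, socket, threading

# Hedged sketch of wrapping a verification tool as a component behind a
# language-neutral message protocol (plain JSON over TCP here); the real
# PROSPER interface language and message format are different.
srv = socket.create_server(("127.0.0.1", 9099))

def serve_once():
    """Toy 'prover' component: answers one {'cmd': 'prove', 'goal': ...} request."""
    conn, _ = srv.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode())
        answer = {"status": "proved" if request.get("goal") == "true" else "unknown"}
        conn.sendall(json.dumps(answer).encode())

def call_component(goal):
    """Client side: any language that can speak TCP and JSON can play this role."""
    with socket.create_connection(("127.0.0.1", 9099)) as conn:
        conn.sendall(json.dumps({"cmd": "prove", "goal": goal}).encode())
        return json.loads(conn.recv(4096).decode())

threading.Thread(target=serve_once, daemon=True).start()
print(call_component("true"))   # {'status': 'proved'}
srv.close()
```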
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and the huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of a 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach for coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines which handle input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication does not go through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, a close match could not be obtained.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem. With DA, this will, for instance, help in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble-size limit on performance lead towards the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, it may result in a cheaper algorithm that eases the computational challenges existing in the fields of modelling and DA.
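A minimal sketch of the non-intrusive, file-based coupling described above, with a plain stochastic EnKF analysis standing in for the VEnKF update; the executable name, file formats and array shapes are placeholders rather than the actual COHERENS/VEnKF setup.

```python
import subprocess
import numpy as np

# Hedged sketch of non-intrusive, file-based coupling between a forward model
# and a DA procedure. MODEL_EXE is a hypothetical executable; the analysis step
# is a plain stochastic EnKF update used as a stand-in for the VEnKF analysis.
MODEL_EXE = "./model_step"          # placeholder forward-model executable

def propagate_member(i, state, t0, t1):
    """Write a member's state to file, run the model externally, read it back."""
    np.savetxt(f"state_in_{i}.txt", state)
    subprocess.run([MODEL_EXE, f"state_in_{i}.txt", f"state_out_{i}.txt",
                    str(t0), str(t1)], check=True)
    return np.loadtxt(f"state_out_{i}.txt")

def enkf_analysis(ensemble, H, y, R):
    """Stochastic EnKF update: ensemble is (n_members, n_state), y is (n_obs,)."""
    X = np.asarray(ensemble)
    A = X - X.mean(axis=0)
    P = A.T @ A / (len(X) - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    rng = np.random.default_rng(0)
    return [x + K @ (y + rng.multivariate_normal(np.zeros(len(y)), R) - H @ x)
            for x in X]

# One assimilation cycle (shown as comments since MODEL_EXE is a placeholder):
# ensemble = [propagate_member(i, x, t0, t1) for i, x in enumerate(ensemble)]
# ensemble = enkf_analysis(ensemble, H, y, R)   # only when measurements exist
```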