942 results for software, translation, validation tool, VMNET, Wikipedia, XML


Relevance: 30.00%

Abstract:

This paper proposes and validates a model-driven software engineering technique for spreadsheets. The technique builds on the embedding of spreadsheet models in a widely used spreadsheet system, enabling the creation and evolution of spreadsheet models within that system. More precisely, we embed ClassSheets, a visual language with a syntax similar to that of common spreadsheets, which was created with the aim of specifying spreadsheets. Our embedding allows models and their conforming instances to be developed in the same environment. In practice, this convenient environment supports evolution steps at the model level while the corresponding instance is automatically co-evolved. Finally, we have designed and conducted an empirical study with human users in order to assess our technique in production environments. The results of this study are promising and suggest that productivity gains are realizable under our model-driven spreadsheet development setting.

Relevance: 30.00%

Abstract:

Dataflow programs are widely used. Each program is a directed graph where nodes are computations and edges indicate the flow of data. In prior work, we reverse-engineered legacy dataflow programs by deriving their optimized implementations from a simple specification graph using graph transformations called refinements and optimizations. In MDE-speak, our derivations were PIM-to-PSM mappings. In this paper, we show how extensions complement refinements, optimizations, and PIM-to-PSM derivations to make the process of reverse engineering complex legacy dataflow programs tractable. We explain how optional functionality in transformations can be encoded, thereby enabling us to encode product lines of transformations as well as product lines of dataflow programs. We describe the implementation of extensions in the ReFlO tool and present two non-trivial case studies as evidence of our work's generality.
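The refinement step described above can be sketched in a few lines: a dataflow graph as an adjacency map, plus a transformation that replaces one abstract node with a concrete subgraph. The node names and the parallel-sort refinement below are illustrative, not taken from ReFlO.

```python
# A dataflow graph as an adjacency map: node -> list of successor nodes.
def refine(graph, node, subgraph, entry, exit_):
    """Replace `node` with `subgraph`: incoming edges are rewired to
    `entry`, and `exit_` inherits the replaced node's outgoing edges."""
    new = {}
    for n, succs in graph.items():
        if n == node:
            continue
        new[n] = [entry if s == node else s for s in succs]
    new.update(subgraph)
    new[exit_] = new.get(exit_, []) + graph[node]
    return new

# Refine an abstract "sort" node into a parallel split/merge subgraph.
g = {"src": ["sort"], "sort": ["sink"], "sink": []}
sub = {"split": ["sort_a", "sort_b"], "sort_a": ["merge"],
       "sort_b": ["merge"], "merge": []}
refined = refine(g, "sort", sub, "split", "merge")
print(refined["src"], refined["merge"])  # → ['split'] ['sink']
```

Optimizations work the same way in reverse: matching a concrete subgraph and rewriting it into a cheaper equivalent.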

Relevance: 30.00%

Abstract:

Introduction: The Endometriosis Health Profile Questionnaire-30 is currently the questionnaire most used to measure quality of life in women with endometriosis. The aim of this study is to evaluate the psychometric properties of the Portuguese version of the Endometriosis Health Profile Questionnaire-30 and to validate it. Material and Methods: A sequential sample of 152 patients with endometriosis, followed in a Portuguese reference center, was asked to complete a questionnaire on social and demographic features, the Portuguese version of the Endometriosis Health Profile Questionnaire-30, and the Short Form Health Survey 36 Item – version 2. Appropriate statistical analysis was performed using descriptive statistics, factor analysis, internal consistency, item-total correlation and convergent validity. Results: Factor analysis confirmed the validity of the five-dimension structure of the Endometriosis Health Profile Questionnaire-30 core questionnaire, which explained 83.2% of the total variance. All item-total correlations presented acceptable results and high internal consistency, with Cronbach's alpha ranging between 0.876 and 0.981 for the core questionnaire and between 0.863 and 0.951 for the modular questionnaire. Significant negative associations between similar scales of the Endometriosis Health Profile Questionnaire-30 and the Short Form Health Survey 36 Item – version 2 were demonstrated. Data completeness was high for all dimensions. The emotional well-being scale in the core questionnaire and the infertility scale in the modular section had the highest median scores, and therefore the most negative impact on the quality of life of participating women. Discussion: The test-retest reliability and responsiveness of the questionnaire should be evaluated in future studies. Conclusion: The present study demonstrates that the Portuguese version of the Endometriosis Health Profile Questionnaire-30 is a valid, reliable and acceptable tool for evaluating the health-related quality of life of Portuguese women with endometriosis.
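The internal-consistency figures reported above come from Cronbach's alpha, which can be computed directly from an item-score matrix. A minimal sketch, using invented scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent items (toy data) yield alpha of 1
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(scores), 3))  # → 1.0
```

Values above roughly 0.9, as reported for this questionnaire, are conventionally read as high internal consistency.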

Relevance: 30.00%

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance: 30.00%

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance: 30.00%

Abstract:

Natural mineral waters (still), effervescent natural mineral waters (sparkling) and fruit-flavored aromatized waters (still or sparkling) are an emerging market. In this work, we evaluated the capability of a potentiometric electronic tongue, comprising lipid polymeric membranes, to quantitatively estimate routinely measured physicochemical quality parameters (pH and conductivity) and to qualitatively classify water samples according to the type of water. The study showed that a linear discriminant model, based on 21 sensors selected by the simulated annealing algorithm, could correctly classify 100% of the water samples (leave-one-out cross-validation). This potential was further demonstrated by applying a repeated K-fold cross-validation (guaranteeing that at least 15% of independent samples were used only for internal validation), for which 96% of correct classifications were attained. The satisfactory recognition performance of the E-tongue could be attributed to the pH, conductivity, sugar and organic acid contents of the studied waters, which resulted in significant differences in sweetness perception indexes and total acid flavor. Moreover, the E-tongue combined with multivariate linear regression models, based on sub-sets of sensors selected by the simulated annealing algorithm, could accurately estimate the waters' pH (25 sensors: R² equal to 0.99 and 0.97 for leave-one-out and repeated K-fold cross-validation, respectively) and conductivity (23 sensors: R² equal to 0.997 and 0.99 for leave-one-out and repeated K-fold cross-validation, respectively). Overall, these satisfactory results allow us to envisage a potential future application of electronic tongue devices for bottled water analysis and classification.
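The leave-one-out figures above follow the standard scheme: hold one sample out, fit on the rest, predict the held-out sample. A minimal sketch, with a nearest-centroid classifier standing in for the paper's linear discriminant model and toy data standing in for the E-tongue signals:

```python
import numpy as np

def loo_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    """Leave-one-out CV accuracy of a nearest-centroid classifier."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i  # hold out sample i
        Xtr, ytr = X[mask], y[mask]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / n

# Two well-separated clusters of 5-dimensional "sensor readings" (toy data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 5)), rng.normal(3, 0.1, (10, 5))])
y = np.array([0] * 10 + [1] * 10)
print(loo_accuracy(X, y))  # → 1.0
```

Repeated K-fold cross-validation generalizes this by holding out a larger fraction (here, at least 15%) in each fold and averaging over repetitions, which gives a less optimistic estimate than leave-one-out.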

Relevance: 30.00%

Abstract:

Problem identification and characterization. One of the most important problems associated with software construction is its correctness. Seeking to provide guarantees of correct software behavior, a variety of development techniques with solid mathematical and logical foundations, known as formal methods, have emerged. Due to their nature, applying formal methods requires considerable experience and knowledge, above all in mathematics and logic, which makes their application costly in practice. As a consequence, their main application has been limited to critical systems, that is, systems whose malfunction can cause major damage, even though the benefits these techniques provide are relevant to all kinds of software. Transferring the benefits of formal methods to software development contexts broader than critical systems would have a high impact on productivity in those contexts. Hypothesis. The availability of automated analysis tools is an element of great importance. Examples of this are several powerful formal-methods-based analysis tools whose application targets source code directly. In the vast majority of these tools, the gap between the notions developers are accustomed to and those required to apply these formal analysis tools remains too wide. Many tools use assertion languages that fall outside developers' usual knowledge and habits. Moreover, in many cases the output provided by the analysis tool requires some command of the underlying formal method. This problem can be alleviated by producing adequate tools. Another problem intrinsic to automated analysis techniques is how they behave as the size and complexity of the elements to be analyzed grows (scalability).
This limitation is widely known and is considered critical for the applicability of formal analysis methods in practice. One way to attack this problem is to exploit information and characteristics of specific application domains. Objectives. This project aims to build formal analysis tools that contribute to the quality, in terms of functional correctness, of specifications, models or code in the context of software development. More precisely, we seek, on the one hand, to identify specific settings in which certain automated analysis techniques, such as SMT- or SAT-solving-based analysis or model checking, can be taken to scalability levels beyond those known for these techniques in general settings. We will attempt to implement the adaptations of the chosen techniques in tools that allow their use by developers familiar with the application context but not necessarily knowledgeable about the underlying methods or techniques. Materials and methods. The materials to be used will be literature relevant to the area and computing equipment. The methods will be those of discrete mathematics, logic and software engineering. Expected results. One expected result of the project is the identification of specific domains for the application of formal analysis methods. We also expect the project to produce analysis tools whose usability is adequate for developers without specific training in the formal methods employed. Importance of the project. The main impact of this project will be its contribution to the practical application of formal analysis techniques at different stages of software development, with the aim of increasing software quality and reliability. A crucial factor for software quality is correctness.
Traditionally, formal approaches to software development concentrate on functional correctness and tackle this problem by building on well-defined notations founded on solid mathematical grounds. This makes formal methods better suited for analysis, due to their precise semantics, but they are usually more complex and require familiarity and experience with the manipulation of mathematical definitions. Consequently, their acceptance by software engineers is rather restricted, and formal methods applications have been confined to critical systems. Nevertheless, it is obvious that the advantages that formal methods provide apply to any kind of software system. It is accepted that appropriate software tool support for formal analysis is essential if one seeks to support software development based on formal methods. Indeed, some of the relatively recent successes of formal methods are accompanied by good-quality tools that automate powerful analysis mechanisms and are even integrated into widely used development environments. Still, most of these tools concentrate on code analysis, and in many cases are far from being simple enough to be employed by software engineers without experience in formal methods. Another important obstacle to the adoption of tool support for formal methods is scalability. Automated software analysis is intrinsically complex, and thus techniques do not scale well in the general case. In this project, we will attempt to identify particular modelling, design, specification or coding activities in software development processes where automated formal analysis techniques can be applied. By focusing on very specific application domains, we expect to find characteristics that can be exploited to increase the scalability of the corresponding analyses, compared to the general case.
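As a toy instance of the automated analyses the project targets, explicit-state model checking amounts to exhaustively exploring a system's reachable states and checking an invariant in each. A minimal sketch for a lock-based mutual-exclusion protocol (illustrative only, not tied to any particular tool):

```python
def reachable(initial, step):
    """Exhaustively explore all states reachable from `initial`."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def step(state):
    """Successors: each process may acquire the free lock or leave its CS."""
    pcs, lock = state
    succs = []
    for i in (0, 1):
        pc = list(pcs)
        if pc[i] == "idle" and not lock:   # acquire lock, enter critical section
            pc[i] = "cs"
            succs.append((tuple(pc), True))
        elif pc[i] == "cs":                # leave critical section, release lock
            pc[i] = "idle"
            succs.append((tuple(pc), False))
    return succs

states = reachable((("idle", "idle"), False), step)
# Invariant: the two processes are never in the critical section together.
print(all(s[0] != ("cs", "cs") for s in states))  # → True
```

The scalability problem the project describes is visible even here: the state space grows exponentially with the number of processes and variables, which is why exploiting domain-specific structure matters.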

Relevance: 30.00%

Abstract:

Background: The Walking Estimated-Limitation Calculated by History (WELCH) questionnaire has been proposed to evaluate walking impairment in patients with intermittent claudication (IC), presenting satisfactory psychometric properties. However, a Brazilian Portuguese version of the questionnaire is unavailable, limiting its application in Brazilian patients. Objective: To analyze the psychometric properties of a translated Brazilian Portuguese version of the WELCH in Brazilian patients with IC. Methods: Eighty-four patients with IC participated in the study. After translation and back-translation, carried out by two independent translators, the concurrent validity of the WELCH was analyzed by correlating the questionnaire scores with walking capacity assessed with the Gardner treadmill test. To determine the reliability of the WELCH, internal consistency and test–retest reliability with a seven-day interval between the two questionnaire applications were calculated. Results: There were significant correlations between the WELCH score and the claudication onset distance (r = 0.64, p = 0.01) and total walking distance (r = 0.61, p = 0.01). The internal consistency was 0.84 and the intraclass correlation coefficient between questionnaire evaluations was 0.84. There were no differences in WELCH scores between the two questionnaire applications. Conclusion: The Brazilian Portuguese version of the WELCH presents adequate validity and reliability indicators, which support its application to Brazilian patients with IC.
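Concurrent validity of the kind reported above is a Pearson correlation between questionnaire scores and treadmill walking distances. A minimal sketch with invented numbers (not the study's data):

```python
import numpy as np

# Hypothetical WELCH scores and treadmill total walking distances (metres).
welch = np.array([18, 25, 31, 40, 47, 55])
total_distance = np.array([150, 210, 260, 390, 430, 520])

# Pearson r between the two measures; np.corrcoef returns the 2x2
# correlation matrix, and the off-diagonal entry is the coefficient.
r = np.corrcoef(welch, total_distance)[0, 1]
print(r > 0.9)  # → True (these toy values are nearly linear)
```

In the study itself, moderate coefficients (r ≈ 0.6) were judged adequate, since the questionnaire and the treadmill test capture related but not identical constructs.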

Relevance: 30.00%

Abstract:

Master's thesis in Human Biology and Environment, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2015

Relevance: 30.00%

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the software varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly.
Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is attracting growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
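The a posteriori adjustment these tools perform is, at its core, a maximum a posteriori (MAP) estimate of individual pharmacokinetic parameters given population priors and a measured blood level. A deliberately simplified one-compartment sketch, with all drug numbers invented for illustration:

```python
import numpy as np

# One-compartment IV bolus model: C(t) = (dose / V) * exp(-(CL / V) * t)
# Population priors (hypothetical): log-normal clearance CL, fixed volume V,
# additive residual error. None of these values come from a real drug model.
dose, V = 500.0, 50.0       # dose (mg), distribution volume (L)
t_obs, c_obs = 12.0, 4.2    # sampling time (h), measured level (mg/L)
cl_pop, omega = 3.0, 0.3    # population CL (L/h), between-subject SD (log scale)
sigma = 0.5                 # residual SD (mg/L)

# Grid search over candidate clearances for the MAP estimate.
cl_grid = np.linspace(0.5, 8.0, 2000)
pred = (dose / V) * np.exp(-cl_grid / V * t_obs)
# Negative log posterior = prior penalty + likelihood penalty (constants dropped)
nlp = (((np.log(cl_grid) - np.log(cl_pop)) / omega) ** 2 / 2
       + ((c_obs - pred) / sigma) ** 2 / 2)
cl_map = cl_grid[np.argmin(nlp)]
print(round(cl_map, 2))  # individualized CL, pulled between prior and data fit
```

The individualized clearance then drives the dose proposal: the same model is run forward with candidate regimens until predicted concentrations sit in the target range. Production tools do this with full population models and more robust estimation, but the shape of the calculation is the same.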

Relevance: 30.00%

Abstract:

This report summarizes a final-year project for the degree in Computer Engineering. It explains the main reasons that motivated the project, together with examples that illustrate the resulting application. In this case, the software attempts to address the current need for ground-truth data for text segmentation algorithms applied to complex color images. All the processes are explained in the different chapters, starting from the problem definition, planning, requirements and design, through to the illustration of the program's results and the resulting ground-truth data.

Relevance: 30.00%

Abstract:

This project presents the development of an application that translates Colored Petri Nets designed in CPN Tools into a language for generating input files for a Colored Petri Net simulator/optimizer. In this way, models created in CPN Tools can be optimized, since that tool does not support optimization. The whole project was implemented in C++.

Relevance: 30.00%

Abstract:

This report describes the development of a useful tool that lets the user visualize, in Google Maps, the positioning data captured during a GPS session. In this project, we designed a Web application that collects the data entered by the user through a form. Once these data are stored on the server, our tool runs the application in charge of computing the positions. This is a script developed in MATLAB, which interprets the data supplied by the user and computes the coordinates captured by the GPS receiver. Once computed, the software stores them on the server in an .xml file, which Google Maps subsequently interprets through its API. In this way, the user obtains a visual rendering of whichever GPS session they choose to load, without needing any specific software to interpret and process the captured data.
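The intermediate XML step described above can be sketched as follows: serialize computed fixes into a small XML document that a map front end could consume. The element and attribute names below are invented for illustration, not the project's actual schema:

```python
import xml.etree.ElementTree as ET

# Computed (lat, lon) fixes for one GPS session (toy values).
fixes = [(41.3851, 2.1734), (41.3870, 2.1700)]

# Build <session><point lat=".." lon=".."/>...</session>
root = ET.Element("session")
for lat, lon in fixes:
    ET.SubElement(root, "point", lat=str(lat), lon=str(lon))

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

A JavaScript front end would then fetch this file and place one marker per `point` element via the mapping API.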

Relevance: 30.00%

Abstract:

PADICAT is the web archive created in 2005 in Catalonia (Spain) by the Library of Catalonia (BC), the national library of Catalonia, with the aim of collecting, processing and providing permanent access to the digital heritage of Catalonia. Its harvesting strategy is based on a hybrid model (massive harvesting of the SPA top-level domain; selective compilation of the web output of Catalan organizations; focused harvesting of public events). The system provides open access to the whole collection on the Internet. We consider it necessary to complement the current search and visualization software with an open source tool, CAT (Curator Archiving Tool), composed of three modules aimed at effectively managing the human cataloguing processes; publishing directories of digital resources and special collections; and offering statistical information of added value to end users. Within the framework of the International Internet Preservation Consortium meeting (Vienna, 2010), the progress in the development of this new tool, and the philosophy that motivated its design, are presented to the international community.

Relevance: 30.00%

Abstract:

This research project deals with the relationship between pedagogy, translation, foreign languages and multiple intelligences. The debate over whether translation is a useful tool in the foreign-language classroom is a current topic that many researchers are still investigating. Recent studies, however, have shown that any translation task, which may include work on the different language skills, is profitable if we regard it as a means, not an end in itself. The use of translation in the classroom is clearly advantageous, but we must also keep certain disadvantages of this practice in mind. One possible disadvantage could be the belief, which many people initially hold, in word-for-word equivalence between one language and another. But after being presented with several translation tasks, students can even come to control unconscious translation and can reach a level of precision and flexibility that is worth mentioning. The main advantage, however, is that they engage with an activity that is widespread in today's society and that combines two languages, for example the mother tongue and the language being studied. From all this we can conclude that using the mother tongue in class should not be considered a crime, as it has been until now, but a virtue, provided of course that it is used correctly. This research project includes a synthesis of the main theories of language acquisition and learning as well as of translation theories. As to whether theories, both of translation and of foreign languages, should be taught implicitly or explicitly, it can be inferred that, depending on the learners' level of study, it will suit them to learn the theories explicitly, or they will in any case learn them implicitly.
Since any group of students is heterogeneous, that is, each individual has a particular pace and level of learning and, above all, different perceptual styles (visual, auditory, gustatory, olfactory, kinesthetic) and therefore different intelligences, teachers must take this into account when planning any program of action for their students. We can therefore conclude that translation tasks or projects can help students learn better and more effectively, and achieve more meaningful learning.