842 results for Software-based techniques


Relevance:

90.00%

Publisher:

Abstract:

This final-year project, carried out by the telecommunications engineer Pedro M. Matamala Lucas, is the final development phase of a larger project: the SAVID forensic video software. The purpose of the project as a whole is the creation of a software tool capable of analyzing video files coded and compressed with the DV (Digital Video) system. The objective of the analysis is to provide information on whether the magnetic tape shows signs of having been edited after the original recording, and to show the user other relevant data such as the technical specifications of the video and audio signals. The user, a forensic video analyst, is thus given information that helps assess the originality of the content of the medium under analysis. The specific objective of this final phase is the creation of the software's user interface, which reports both the binary code of the significant sectors and its interpretation after analysis. It also allows the user to report the results and to browse the sectors of the code that were modified as a side effect of editing the original magnetic tape. Another important objective of the project was the investigation of software development methodologies and techniques for later adoption, seeking greater efficiency in time management and higher software quality in order to guarantee the software's future evolution and maintainability. Emphasis was placed on agile methodologies, which have gained relevance in the information technology sector over recent decades, replacing classical methodologies such as waterfall development. Their flexibility during the software life cycle yields better results when the specifications are not fully defined, adapting to the conditions of the project. Summarizing the technical specifications: the software was developed in C++, an object-oriented programming language, using MFC (Microsoft Foundation Classes) for the implementation. It is a dialog-box MFC project, created, compiled and released with the Microsoft Visual Studio 2010 integrated development environment. The architecture follows the classic three-layer pattern: user interface, business layer and data access layer. It was necessary to configure the project with CLR (Common Language Runtime) support in order to implement the reporting functionality. The application is accompanied by the project report and its annexes: the Detailed Functional Requirements Specifications (EDRF), the User Interface Specifications (EIU), the Technical Design (DT) and the User Guide.

Relevance:

90.00%

Publisher:

Abstract:

This document describes a project to improve a signal analysis program by applying software optimization techniques and acceleration technologies that exploit the parallelism in the program. It also analyzes the use of two technologies based on different parallel programming paradigms: one using multiple threads with shared memory, and the other using GPUs as co-processing devices.
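
As a rough illustration of the two paradigms compared in this work, the sketch below runs the same toy per-block analysis first with shared-memory threads and then on a GPU. It is a minimal sketch, not the project's code: the workload (block-wise FFT magnitudes) and the libraries (numpy, concurrent.futures, optionally CuPy) are assumptions chosen for brevity.

```python
# Toy "signal analysis" kernel run under two parallel paradigms.
# The workload and library choices are illustrative, not the project's.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def analyze_block(block: np.ndarray) -> np.ndarray:
    # numpy releases the GIL inside the FFT, so threads overlap real work
    return np.abs(np.fft.fft(block))

signal = np.random.randn(1 << 20).astype(np.float32)
blocks = signal.reshape(64, -1)             # shared-memory view, no copies

# Paradigm 1: multiple threads over shared memory
with ThreadPoolExecutor(max_workers=8) as pool:
    cpu_result = np.stack(list(pool.map(analyze_block, blocks)))

# Paradigm 2: the GPU as a co-processing device (requires CuPy + CUDA)
try:
    import cupy as cp
    gpu_blocks = cp.asarray(blocks)                        # host -> device
    gpu_result = cp.asnumpy(cp.abs(cp.fft.fft(gpu_blocks, axis=1)))
except ImportError:
    gpu_result = None    # no GPU stack available; keep the CPU result
```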

Relevance:

90.00%

Publisher:

Abstract:

Background: Europe is living in an unsustainable situation. The economic crisis has been reducing governments' economic resources since 2008 and threatening social and health systems, while the proportion of older people in the European population continues to increase, so that it is foreseen that in 2050 there will be only two workers per retiree [54]. To this must be added the rise, strongly related to age, of chronic diseases, whose burden has been estimated at up to 7% of a country's gross domestic product [51]. There is a need for a paradigm shift: a new way of caring for people's health, shifting the focus from curing conditions that have already arisen to a sustainable and effective approach with the emphasis on prevention. Some advocate the adoption of personalised health care (pHealth), a model where medical practices are tailored to the patient's unique life, from the detection of risk factors to the customization of treatments based on each individual's response [81]. Personalised health is often associated with the use of Information and Communications Technology (ICT), which, with its exponential development, offers interesting opportunities for improving healthcare. The shift towards pHealth is slowly taking place, both in research and in industry, but the change is not yet significant. Many barriers still exist related to economy, politics and culture, while others are purely technological, like the lack of interoperable information systems [199]. Though interoperability aspects are evolving, there is still a need for a reference design, especially one tackling implementation and large-scale deployment of pHealth systems. This thesis contributes to organizing the subject of ICT systems for personalised health into a reference model that allows for the creation of software development platforms to ease common development issues in the domain.

Research questions: RQ1 Is it possible to define a model, based on software engineering techniques, that represents the personalised health domain in an abstract and representative way? RQ2 Is it possible to build a development platform based on this model? RQ3 Does the development platform help developers create complex, integrated pHealth systems?

Methods: As the method for describing the model, the ISO/IEC/IEEE 42010 framework [25] was adopted for its generality and high level of abstraction. The model is specified in several parts: a conceptual model, which uses concept maps to represent stakeholders, artefacts and shared information, and scenarios and use cases to represent the functionalities of pHealth systems. The model was derived from a literature analysis covering 7 industrial and scientific reports, 9 standards, 10 conference papers, 37 journal papers, 25 websites and 5 books. Based on the reference model, requirements were drawn up for building the development platform, enriched with a set of requirements gathered in a survey of 11 experienced engineers. For developing the platform, the continuous integration methodology [74] was adopted, which made it possible to run automatic tests on a server and to deploy packaged releases on a website. As the validation methodology, a theory-building framework for software engineering [181] was adopted. The framework, chosen as a guide to finding evidence to justify the research questions, imposed the creation of theories based on models and propositions to be validated within a defined scope. The validation of the model was conducted as an online survey in three rounds with a growing number of participants. The survey was sent to 134 experts in the field and distributed through public channels such as relevant mailing lists and social networks. Its objective was to assess the model's readability, its level of coverage of the domain and its potential usefulness in the design of actual, derived systems. The questionnaires included quantitative Likert-scale questions and free-text fields for comments. The development platform was validated in two stages. In a small-scale experiment, the platform was used in a 12-hour training session in which 4 developers had to complete an exercise consisting of developing a set of typical pHealth use cases; at the end of the session, a focus group was held to identify the benefits and drawbacks of the platform. The second validation was a case study within a large-scale research project called HeartCycle, whose aim was to develop a closed-loop disease management system for heart failure and coronary heart disease patients [160]. During this project three applications were developed by a team of programmers and designers, and one of them was tested in a clinical trial with actual patients. At the end of the project, the team was interviewed in a focus group to assess the role the platform had played within the project.

Results: Regarding the model that describes the pHealth domain, its conceptual part includes a description of the main roles and concerns of pHealth stakeholders, a model of the ICT artefacts that are commonly adopted, and a model representing the typical data that need to be formalized and exchanged among pHealth systems. The functional model includes a set of 18 scenarios, divided into the assisted person's view, the caregiver's view, the developer's view, the technology and services providers' view and the authorities' view, and a set of 52 use cases grouped into 6 categories: assisted person's activities, system reactions, caregiver's activities, user engagement, developer's activities and deployer's activities. In the validation of the model, a total of 65 people participated in the online survey, providing their level of agreement on all the assessed dimensions and a total of 248 comments on how to improve and complete the model. Participants' backgrounds spanned from engineering and software development (70%) to medical specialities (15%), with declared interest in the fields of eHealth (24%), mHealth (16%), Ambient Assisted Living (21%), Personalized Medicine (5%), Personal Health Systems (15%), Medical Informatics (10%) and Biomedical Engineering (8%), and an average of 7.25±4.99 years of experience in these fields. The analysis of the answers shows that the contacted experts considered the model easily readable (average of 1.89±0.79, 1 being the most favourable score and 5 the worst), sufficiently abstract (1.99±0.88) and formal (2.13±0.77) for its purpose, with sufficient coverage of the domain (2.26±0.95), useful for describing the domain (2.02±0.70) and for generating more specific systems (2.00±0.75); they reported a partial interest in using the model in their job (2.48±0.91). Thanks to their comments the model was improved and enriched with concepts that were missing at the beginning; nonetheless, it was not possible to prove an improvement across the iterations, owing to the different composition of participants in the three rounds. From the model, a development platform for the pHealth domain was generated, called the pHealth Patient Platform (pHPP). The platform includes a set of libraries, programming and deployment tools, a tutorial and a sample application. The four main modules of the architecture are: the Data Collection Engine, which allows abstracting sources of information like sensors or external services, mapping data to databases and ontologies, and allowing event-based interaction and filtering; the GUI Engine, which abstracts the user interface into a message-like interaction model; the Workflow Engine, which allows programming the application's user interaction flows with graphical workflows; and the Rule Engine, which gives developers a simple means of programming the application's logic in the form of "if-then" rules. After the five years of the HeartCycle project, partially programmed with pHPP, 5 developers met in a focus group to discuss the advantages and drawbacks of the platform. The view that emerged from the training course and the focus group was that the platform is well suited to the needs of engineers working in the field: it allowed the separation of concerns among the different specialities and it simplified some common development tasks such as data management and asynchronous interaction. Nevertheless, some deficiencies were pointed out, in terms of a lack of maturity of some technological choices and the absence of some domain-specific tools, e.g. for data processing or for health-related communication protocols. Within HeartCycle, the platform was used to develop part of the Guided Exercise system, a composition of ICT tools for the physical rehabilitation of patients who have suffered a myocardial infarction. The system developed using the platform was tested in a randomized controlled clinical trial in which 55 patients used the system for 21 weeks. The technical results of this trial showed that the system was stable and reliable; some minor bugs were detected, but these were promptly corrected using the platform. This shows that the platform, as well as facilitating the development task, can be used to produce reliable software.

Conclusions: The research work carried out in this thesis provides answers to the three research questions that motivated it. RQ1: A model was developed representing the domain of personalised health systems; the assessment of experts in the field was that it represents the domain accurately, with an appropriate balance between abstraction and detail. RQ2: A development platform based on the model was successfully developed. RQ3: The platform has been shown to assist developers in creating complex pHealth software. This was demonstrated within the scope of one large-scale project, but the generic approach adopted provides indications that it would offer benefits more widely. The results of these evaluations indicate that both the model and the platform are good candidates for becoming a reference for future pHealth developments.
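
Of the four pHPP modules, the Rule Engine is the easiest to illustrate compactly. The sketch below shows a generic "if-then" rule engine of the kind the abstract describes; the class names, the tick-based evaluation and the health-flavoured example rule are hypothetical, not the actual pHPP API.

```python
# Minimal "if-then" rule engine in the style described for the Rule Engine
# module; every name here is hypothetical, not the actual pHPP API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # the "if" part, evaluated on app state
    action: Callable[[dict], None]      # the "then" part, run when it holds

class RuleEngine:
    def __init__(self):
        self.rules: list[Rule] = []

    def add(self, condition, action):
        self.rules.append(Rule(condition, action))

    def tick(self, state: dict):
        # naive forward evaluation: fire every rule whose condition holds
        for rule in self.rules:
            if rule.condition(state):
                rule.action(state)
        return state

engine = RuleEngine()
engine.add(lambda s: s["heart_rate"] > 120,
           lambda s: s.setdefault("alerts", []).append("tachycardia"))
print(engine.tick({"heart_rate": 132}))   # fires the rule, records an alert
```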

Relevance:

90.00%

Publisher:

Abstract:

Shopping agents are web-based applications that help consumers find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, to increase the level of support provided by this type of application. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the needs and preferences of customers.
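
To make the idea of dynamic, model-based configuration concrete, here is a minimal sketch in which a declarative product model plus constraints is searched for configurations that match a customer's stated preferences. The product model, the constraint and the function names are invented for illustration; the abstract only summarises the paper's actual technique.

```python
# Minimal sketch of model-based product configuration as constraint
# satisfaction; the product model and constraint are invented examples.
from itertools import product as cartesian

# Declarative product model: components and their possible options
MODEL = {
    "cpu":    ["i5", "i7"],
    "ram_gb": [8, 16, 32],
    "disk":   ["hdd", "ssd"],
}

# Constraints encode domain knowledge (e.g., an i7 needs at least 16 GB)
CONSTRAINTS = [
    lambda c: not (c["cpu"] == "i7" and c["ram_gb"] < 16),
]

def configure(preferences):
    """Yield configurations consistent with the model, the constraints
    and the customer's stated preferences."""
    keys = list(MODEL)
    for combo in cartesian(*MODEL.values()):
        candidate = dict(zip(keys, combo))
        if all(check(candidate) for check in CONSTRAINTS) and \
           all(candidate[k] == v for k, v in preferences.items()):
            yield candidate

print(list(configure({"disk": "ssd"})))   # all valid SSD configurations
```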

Relevance:

90.00%

Publisher:

Abstract:

This paper argues for the utility of advanced knowledge-based techniques to develop web-based applications that help consumers find products in e-commerce marketplaces. In particular, we describe the idea of a model-based approach to developing a shopping agent that dynamically configures a product according to the needs and preferences of customers. Finally, the paper summarizes the advantages provided by this approach.

Relevance:

90.00%

Publisher:

Abstract:

Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be made available on the Web to be consumed by other (semantically enabled) resources in a direct manner, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and for software agents (via a SPARQL endpoint).
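
A minimal sketch of the publication step using rdflib: a translation relation between two lexical entries is expressed as RDF triples and serialised as Turtle. The namespace and property names below follow the spirit of the proposed translation module but are assumptions for illustration, not the module's actual vocabulary.

```python
# Minimal sketch of publishing a term translation as linked data with rdflib.
# The namespace and property names are assumptions for illustration only.
from rdflib import Graph, Literal, Namespace, URIRef

TRANS = Namespace("http://example.org/translation#")   # hypothetical vocabulary
g = Graph()
g.bind("trans", TRANS)

t = URIRef("http://example.org/translation/bank-banco")
g.add((t, TRANS.translationSource, URIRef("http://example.org/lexicon/en/bank")))
g.add((t, TRANS.translationTarget, URIRef("http://example.org/lexicon/es/banco")))
g.add((t, TRANS.context, Literal("finance")))

# Turtle output, ready to load into a triple store behind a SPARQL endpoint
print(g.serialize(format="turtle"))
```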

Relevance:

90.00%

Publisher:

Abstract:

Historically, teachers have always searched for a connection with their students to make education interesting and a vital experience. In the 19th century, the pedagogue Johann Heinrich Pestalozzi taught children how to add using wooden blocks. His successors have followed his legacy and today use a wide variety of media, including board games, to reach their students. These methods are called educational technologies, defined as the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources. With the advent of information technologies, teachers have at their disposal new media with which they can increase the interest of their students; this technological revolution is changing the present educational model. The objective of this dissertation is to develop an educational videogame to help students learn mathematics. To reach this goal, the videogame has been developed with the Unity game engine as the main tool, applying agile software development methodologies as well as other software engineering techniques. The result is Riskmatica, an educational videogame based on geographical domination in which knowledge is the best weapon: players conquer enemy territories by correctly answering mathematical questions. The videogame also provides the functionality required to configure a new game and to input new questions. In conclusion, this project has created an educational technology that greatly appeals to students and that educators can use to improve their mathematics lessons.

Relevance:

90.00%

Publisher:

Abstract:

The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is a growing concern about the mitigation of SEU and SET effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On the one hand, the approach is based on software redundancy techniques for correcting errors produced in the data. On the other hand, control-flow errors can be detected by reusing the on-chip debug interface present in most modern microprocessors. Experimental results show an important increase in system reliability, exceeding two orders of magnitude, in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly acceptable for low-cost systems.
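
As a flavour of the software-redundancy half of the technique, the sketch below triplicates a datum and applies a majority vote on read, masking a single corrupted replica. It is transposed to Python purely for illustration (techniques like this are normally applied to embedded C code), and the control-flow checking via the on-chip debug interface is not shown.

```python
# Illustrative sketch of data triplication with majority voting; real
# implementations instrument embedded C, not Python.
class HardenedVar:
    def __init__(self, value):
        self._copies = [value, value, value]   # three replicas of the datum

    def read(self):
        a, b, c = self._copies
        # majority vote masks a single corrupted replica (an SEU in one copy);
        # if a disagrees with both others, b and c must agree under the
        # single-error assumption
        voted = a if a == b or a == c else b
        self._copies = [voted, voted, voted]   # scrub: rewrite all replicas
        return voted

    def write(self, value):
        self._copies = [value, value, value]

x = HardenedVar(42)
x._copies[1] = 99        # simulate a bit-flip corrupting one replica
assert x.read() == 42    # the vote recovers the original value
```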

Relevance:

90.00%

Publisher:

Abstract:

Obtaining wind vectors over the ocean is important for weather forecasting and ocean modelling. Several satellite systems used operationally by meteorological agencies utilise scatterometers to infer wind vectors over the oceans. In this paper we present the results of using novel neural network based techniques to estimate wind vectors from such data. The problem is partitioned into estimating wind speed and wind direction. Wind speed is modelled using a multi-layer perceptron (MLP) and a sum of squares error function. Wind direction is a periodic variable and a multi-valued function for a given set of inputs; a conventional MLP fails at this task, and so we model the full periodic probability density of direction conditioned on the satellite derived inputs using a Mixture Density Network (MDN) with periodic kernel functions. A committee of the resulting MDNs is shown to improve the results.
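
To see why a periodic kernel is needed, the sketch below evaluates a small mixture of von Mises kernels, the circular analogue of the Gaussian kernels used in a conventional MDN. In the paper's setting the network would output the mixture weights, centres and concentrations as functions of the satellite-derived inputs; here they are fixed numbers chosen to show a bimodal direction density.

```python
# Sketch of the periodic-kernel idea: a mixture of von Mises densities.
# The fixed mixture parameters below are purely illustrative.
import numpy as np
from scipy.special import i0   # modified Bessel function of order zero

def von_mises(theta, mu, kappa):
    # normalised periodic kernel on the circle, replacing the Gaussian
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))

def mixture_density(theta, weights, mus, kappas):
    return sum(w * von_mises(theta, m, k)
               for w, m, k in zip(weights, mus, kappas))

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
# a bimodal direction density: the kind of multi-valued target on which a
# single-output MLP fails but a mixture model succeeds
pdf = mixture_density(theta, [0.6, 0.4], [0.5, 0.5 + np.pi], [8.0, 8.0])
assert np.isclose(pdf.sum() * (2 * np.pi / theta.size), 1.0)  # integrates to 1
```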


Relevance:

90.00%

Publisher:

Abstract:

This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs; this composition of controller and protocol model allows the complete system to be analysed in a unified manner. A common problem for Petri-net-based techniques is state-space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must still be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of the state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri-net-based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
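
For reference, here is a minimal sketch of the standard two-phase commit protocol that this research extends; the coordinator and participants are plain objects, with none of the timeout and failure handling that the robust, real-time variants add.

```python
# Minimal two-phase commit: phase 1 collects votes, phase 2 enforces the
# unanimous decision. No failure/timeout handling, unlike the robust variants.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
        self.state = "INIT"

    def prepare(self):                  # phase 1: vote yes/no
        self.state = "READY" if self.can_commit else "ABORTED"
        return self.can_commit

    def finish(self, commit):           # phase 2: apply the global decision
        self.state = "COMMITTED" if commit else "ABORTED"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]   # phase 1: voting
    decision = all(votes)                         # unanimous yes => commit
    for p in participants:
        p.finish(decision)                        # phase 2: completion
    return decision

actuators = [Participant("valve"), Participant("pump", can_commit=False)]
assert two_phase_commit(actuators) is False       # one abort aborts all
```

A single negative vote in phase one aborts the whole transaction, which is exactly the all-or-nothing coordination property the distributed controllers require of an atomic action.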

Relevance:

90.00%

Publisher:

Abstract:

Uncertainty can be defined as the difference between the information that is represented in an executing system and the information that is both measurable and available about the system at a certain point in its lifetime. A software system can be exposed to multiple sources of uncertainty produced by, for example, ambiguous requirements and unpredictable execution environments. A runtime model is a dynamic knowledge base that abstracts useful information about the system, its operational context and the extent to which the system meets its stakeholders' needs. A software system can successfully operate in multiple dynamic contexts by using runtime models that augment information available at design time with information monitored at runtime. This chapter explores the role of runtime models as a means of coping with uncertainty. To this end, we introduce suitable terminology for models, runtime models and uncertainty, and present a state-of-the-art summary of model-based techniques for addressing uncertainty both at development time and at runtime. Using a case study about robot systems, we discuss how current techniques and the MAPE-K loop can be used together to tackle uncertainty. Furthermore, we propose possible extensions of the MAPE-K loop architecture with runtime models to further handle uncertainty at runtime. The chapter concludes by identifying key challenges and enabling technologies for using runtime models to address uncertainty, and by identifying closely related research communities that can foster ideas for resolving the challenges raised. © 2014 Springer International Publishing.
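
A minimal skeleton of a MAPE-K loop operating over a runtime model, in the spirit of the chapter; the robot-domain details (a battery level, a recharge goal) are invented for illustration.

```python
# Skeleton of a MAPE-K loop over a runtime model; domain details invented.
class RuntimeModel:
    """Knowledge: what the running system currently believes about itself
    and its operational context."""
    def __init__(self):
        self.data = {"battery": 1.0, "goal": "patrol"}

def monitor(model, sensors):
    model.data.update(sensors)             # M: refresh the model from sensing

def analyze(model):
    return model.data["battery"] < 0.2     # A: detect a threatening condition

def plan(model):
    return [("set_goal", "recharge")]      # P: choose an adaptation

def execute(model, actions):
    for name, value in actions:            # E: enact the plan on the system
        if name == "set_goal":
            model.data["goal"] = value

model = RuntimeModel()
monitor(model, {"battery": 0.15})
if analyze(model):
    execute(model, plan(model))
assert model.data["goal"] == "recharge"
```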

Relevance:

90.00%

Publisher:

Abstract:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error ("the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey" [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online questionnaire delivery. The second error type, the non-response error, occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today's societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike with traditional questionnaires, the delivery mechanism for online questionnaires makes estimating questionnaire length and the time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation, and indeed to provide respondents with context-sensitive assistance during the response process, thereby reducing abandonment while eliciting feelings of accomplishment [6]. For online questionnaires, sampling error ("the result of attempting to survey only some, and not all, of the units in the survey population" [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors ("the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained" [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.

Relevance:

90.00%

Publisher:

Abstract:

We present a study of the influence of dispersion-induced phase noise in CO-OFDM systems using software-based FFT multiplexing/IFFT demultiplexing techniques. The software-based system provides a method for rigorously evaluating the phase noise variance caused by the Common Phase Error (CPE) and Inter-Carrier Interference (ICI), including, for the first time to our knowledge, the effect of equalization-enhanced phase noise (EEPN) in explicit form. This, in turn, leads to an analytic BER specification. Numerical results focus on a CO-OFDM system with 10-25 GS/s QPSK channel modulation. A worst-case constellation configuration is identified for the phase noise influence, and the resulting BER is compared to that of a conventional single-channel QPSK system with the same capacity as the CO-OFDM implementation. Results are evaluated as a function of transmission distance. For both types of systems, the phase noise variance increases significantly with increasing transmission distance. For a total capacity of 400 (1000) Gbit/s, the transmission distance over which the BER stays below 10^-2 for the worst-case CO-OFDM design is less than 800 (460) km, whereas for a single-channel QPSK system it is less than 1400 (560) km.
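
The software-based multiplexing itself reduces to an IFFT/FFT pair. The sketch below maps QPSK symbols onto subcarriers, adds a cyclic prefix and recovers the symbols at the receiver; all channel impairments (dispersion, phase noise, EEPN) are deliberately left out, and the parameters are illustrative rather than those of the studied system.

```python
# Minimal software-based OFDM multiplex/demultiplex with an IFFT/FFT pair.
# Impairment-free back-to-back link; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_sc = 64                                   # number of subcarriers

# QPSK symbols, one per subcarrier
bits = rng.integers(0, 2, size=(n_sc, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

tx = np.fft.ifft(qpsk)                      # subcarriers -> time-domain frame
cp = tx[-8:]                                # cyclic prefix against dispersion
frame = np.concatenate([cp, tx])

rx = frame[8:]                              # receiver: strip cyclic prefix
recovered = np.fft.fft(rx)                  # time domain -> subcarriers
assert np.allclose(recovered, qpsk)         # perfect recovery without channel
```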

Relevance:

90.00%

Publisher:

Abstract:

Heuristics, simulation, artificial intelligence techniques and combinations thereof have all been employed in the attempt to make computer systems adaptive, context-aware, reconfigurable and self-managing. This paper complements such efforts by exploring the possibility of achieving runtime adaptiveness using mathematically based techniques from the area of formal methods. It is argued that formal methods @ runtime represents a feasible approach, and promising preliminary results are summarised to support this viewpoint. The survey of existing approaches to employing formal methods at runtime is accompanied by a discussion of their challenges and of the future research required to overcome them. © 2011 Springer-Verlag.
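
As one concrete instance of formal methods applied at runtime, the sketch below implements a runtime verification monitor for a bounded-response property ("every request is answered within 3 steps") checked incrementally against a live event trace. The property, the bound and the event names are invented for illustration.

```python
# Runtime verification monitor for a bounded-response property; the
# property and event vocabulary are invented examples.
class BoundedResponseMonitor:
    def __init__(self, bound=3):
        self.bound = bound
        self.pending = []        # steps elapsed since each open request

    def step(self, event):
        self.pending = [age + 1 for age in self.pending]
        if event == "request":
            self.pending.append(0)
        elif event == "response" and self.pending:
            self.pending.pop(0)  # oldest open request is served first
        # verdict: property holds iff no request has aged past the bound
        return all(age <= self.bound for age in self.pending)

mon = BoundedResponseMonitor()
trace = ["request", "idle", "idle", "response", "request", "idle",
         "idle", "idle", "idle"]
verdicts = [mon.step(e) for e in trace]
assert verdicts[-1] is False     # the second request missed its deadline
```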