787 results for Cross-platform software development
Abstract:
The evolution of communications networks towards Next Generation Networks (NGN) has encouraged the development of new services. Nowadays, several technologies are being integrated into telecommunications services in order to provide new functionalities, resulting in what are known as converged services. The objective is to adapt the behavior of the services to the needs of different users, generating customized services. Some of the main technologies involved in their development are those related to the Web. But because this type of service combines different technologies, its development is a very complex process that has to be improved to reduce the time and cost required, with the aim of promoting the success of such services. This paper proposes applying software reuse through a component library and presents one focused on ECharts for SIP Servlets (E4SS), a framework based on the SIP Servlet specification that uses finite state machines to define converged communications services. To promote the use of the library, a methodology is also proposed to facilitate the integration of the library operations into the software development cycle.
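As a concrete picture of the kind of reusable component involved, here is a minimal sketch of a converged-communications building block written against the standard SIP Servlet API (JSR 289), the specification E4SS is based on. The finite-state-machine layer that E4SS adds on top is not shown, and the class name and call handling are illustrative assumptions.

```java
// Minimal sketch of the kind of component E4SS builds on: a service written
// against the standard SIP Servlet API (JSR 289). The E4SS finite-state-machine
// layer is not shown; the class name and behavior are illustrative only.
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.sip.SipServlet;
import javax.servlet.sip.SipServletRequest;
import javax.servlet.sip.SipServletResponse;

public class AcceptCallServlet extends SipServlet {

    // Called by the container for incoming INVITE requests (call setup).
    @Override
    protected void doInvite(SipServletRequest req)
            throws ServletException, IOException {
        // Accept the call unconditionally; a real converged service would
        // consult web-side state or user preferences at this point.
        SipServletResponse ok = req.createResponse(SipServletResponse.SC_OK);
        ok.send();
    }

    // Called when the remote party ends the call.
    @Override
    protected void doBye(SipServletRequest req)
            throws ServletException, IOException {
        req.createResponse(SipServletResponse.SC_OK).send();
    }
}
```

A library of such components, each wrapping one well-defined piece of call logic, is what makes the reuse-based methodology proposed in the paper practical.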
Abstract:
This dissertation discusses how different practitioners define project success and success factors for software projects and products. The motivation for this work is to identify how software practitioners value and define project success, which can have implications for both practitioner motivation and software development productivity. Accordingly, we are interested in the various perceptions of the term "success" among different software practitioners and researchers. To obtain this information we performed a systematic mapping of the software development literature of recent years, trying to identify stakeholders' perceptions of project success as well as possible differences among the views of a project's various stakeholders. Some common terms related to project success (success project; software project success factors) were considered in formulating the search strings. The results were limited to twenty-two selected peer-reviewed conference papers and journal articles published between 2003 and 2012.
Abstract:
Recommender systems are powerful information-filtering tools that offer users personalized suggestions about items intended to satisfy their needs. Traditionally, recommendations have been based on users' ratings and on data about item consumption history and the transactions carried out in the system. However, the remarkable penetration of mobile devices in our society has created new opportunities to improve these systems by implementing them in ubiquitous environments that provide rich contextual information about the user's location or current activity. Because of this all-mobile lifestyle, users are permanently connected socially, so their context can be enriched not only with physical information but also with a social dimension. These novel contextual data sources have given rise to mobile Context-Aware Recommender Systems (CARS) as a research area that seeks to raise the level of personalization in recommendation. On the other hand, a scenario in which users carry a mobile device with them at all times opens the door to new ways of recommending: the traditional request-response usage pattern can evolve into a proactive approach, in which the system identifies the most appropriate moment to generate a recommendation, without an explicit user request, by analyzing the user's context.

This dissertation proposes a set of models, algorithms and methods for incorporating proactivity into mobile CARS, while studying the impact of proactive recommendations on user experience in order to draw significant conclusions about "what", "when" and "how" to notify users proactively. To this end, the dissertation starts by proposing a general architecture for building mobile CARS in scenarios with rich social data, together with a new way of representing the recommendation process through a REST interface, which makes the architecture multi-device and cross-platform compatible. Details of its implementation and evaluation in a Spanish banking scenario are provided to validate its usefulness and user acceptance. After that, a novel model for incorporating proactivity into mobile CARS is presented. It captures the key ideas for analyzing a situation in order to decide when a proactive recommendation is warranted, through algorithms that relate how propitious a situation is to the suitability of the candidate items to be recommended; the viability of the model is demonstrated by applying it to a recommendation scenario for e-learning authoring tools. Following this model, the dissertation presents the design and implementation of new mobile user interfaces for proactive recommendations, together with the results of their evaluation among users, which yielded important conclusions about the most relevant factors to consider when designing proactive systems. Building on these outcomes, the last part of the thesis presents a methodology for calculating how appropriate a situation is for recommending proactively under the proposed model. As a conclusion, the dissertation describes the validation carried out by applying the architecture, the proactive recommendation model and its methods in a European e-learning social network, and finally discusses the conclusions drawn throughout this extensive research, which provides a solid theoretical and practical basis for building proactive mobile recommender systems based on contextual information.
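To illustrate the REST representation of the recommendation process argued for in the dissertation, the following JAX-RS sketch exposes a recommendation request in a device- and platform-independent way. The resource path, query parameters and response shape are assumptions made for this sketch, not the thesis's actual interface.

```java
// Illustrative JAX-RS resource sketching a device- and platform-independent
// REST recommendation interface. The path, parameters and response shape are
// assumptions for this sketch, not the dissertation's actual API.
import java.util.Arrays;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/recommendations")
public class RecommendationResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> recommend(@QueryParam("user") String userId,
                                  @QueryParam("lat") double latitude,
                                  @QueryParam("lon") double longitude) {
        // A mobile CARS would combine the user's profile with the physical
        // context (location) and its social dimension before ranking items;
        // fixed identifiers stand in for that ranking here.
        return Arrays.asList("item-42", "item-7", "item-13");
    }
}
```

Because any HTTP-capable client can call such a resource, the same recommendation back end can serve native mobile apps and web clients alike, which is the cross-platform property the architecture aims for.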
Abstract:
This dissertation, carried out by the telecommunications engineer Pedro M. Matamala Lucas, is the final development phase of a larger project: the forensic video software SAVID. The purpose of the project as a whole is the creation of a software tool capable of analyzing video files coded and compressed with the DV (Digital Video) system. The objective of the analysis is to provide information on whether the magnetic tape shows signs of having been manipulated by editing after the original recording, and to show the user other data of interest, such as the technical specifications of the video and audio signals. The tool thus gives the user, a forensic video analyst, information that helps assess the originality of the content of the medium under analysis. The specific objective of this final phase is the creation of the software's user interface, which reports both the binary code of the significant sectors and its interpretation after analysis. It also allows the user to report the results, and offers other functionality for navigating through the sectors of the code that were modified as a side effect of editing the original magnetic tape. Another important objective of the project has been the investigation of software development methodologies and techniques for later application, seeking greater efficiency in time management and higher software quality so as to guarantee the software's evolution and sustainability in the future. Emphasis has been placed on the agile methodologies that have gained relevance in the information technology sector in recent decades, replacing classical methodologies such as waterfall development; their flexibility during the software life cycle produces better results when the specifications are not fully defined, adapting to the conditions of the project. Summarizing the technical specifications: the software was developed in C++, an object-oriented programming language, using Microsoft Foundation Classes (MFC) for the implementation. It is a dialog-box MFC project, created, compiled and published with the Microsoft Visual Studio 2010 integrated development environment. Its architecture is the archetypal three-layer one, composed of the user interface, the business layer and the data access layer. It was necessary to configure the project with CLR (Common Language Runtime) compatibility in order to implement the report-generation functionality. The application is accompanied by the project report and its annexes: the EDRF (Detailed Functional Requirements Specification), the EIU (User Interface Specification), the DT (Technical Design) and the User Guide.
Abstract:
At present there are many applications for managing software projects, each of which aims to solve and facilitate the tasks of the managers and developers on the development teams. Software development teams are usually made up of a wide variety of resources, both human and material. Each one performs a specific role in the project, possibly without full-time dedication to it, so these resources need to be shared across the existing project portfolio. To meet this need, a fundamental requirement for project management applications is the ability to manage several projects simultaneously (multi-project management), sharing the dedication of resources among the projects in the portfolio. There is currently a large number of project management methodologies, so the success of a project lies partly in choosing the most appropriate one. Among all the existing methodologies, this study focuses on the increasingly used agile project management methodologies: it describes what they consist of, the benefits they provide compared with classical methodologies, and which are the most widely used thanks to their proven benefits and the value they bring to project management. Accordingly, another fundamental requirement when assessing project management applications is their capacity to support and apply agile project management methodologies. The study also takes into account the type of application according to its installation and access, distinguishing between web applications, which must be installed on a web server and are accessible from any device with a browser, and desktop applications, which must be installed locally on a machine and can only be accessed from that machine. Several applications were evaluated against the characteristics described above, resulting in three selected applications, these being the ones that can bring the most value to managing a project portfolio.
Abstract:
Nowadays, Software Product Line (SPL) engineering [1] has been widely adopted in software development due to the significant improvements it has provided, such as reducing cost and time-to-market and providing the flexibility to respond to planned changes [2]. SPL takes advantage of common features among the products of a family through the systematic reuse of core assets and the effective management of variability across the products. SPL features are realized at the architectural level in product-line architecture (PLA) models; therefore, suitable modeling and specification techniques are required to model variability. In fact, architectural variability modeling has become a challenge for SPL engineering, because PLA modeling requires modeling variability not only at the level of the external architecture configuration (see the literature reviews [3,4]), but also at the level of the internal specification of components [5]. In addition, PLA modeling requires preserving the traceability between features and PLAs. Finally, it is important to take into account that PLA modeling should guide architects in modeling the PLA core assets and variability, and in deriving the customized products. To address these needs, we present in this demonstration the FPLA Modeling Framework.
Abstract:
Software needs to be accessible to persons with disabilities, and there are several guidelines to assist developers in building more accessible software. Regulatory activities are beginning to make software accessibility a mandatory requirement in some countries. One such activity is the European Mandate M 376, which will result in a European standard (EN 301 549) defining functional accessibility requirements for information and communication technology products and services. This paper provides an overview of Mandate M 376 and EN 301 549, and describes the requirements for software accessibility defined in EN 301 549, following a feature-based approach.
Abstract:
The Software Engineering (SE) community has historically focused on working with models that represent functionality and persistence, pushing interaction modelling into the background, an area covered instead by the Human-Computer Interaction (HCI) community. Recently, adequately modelling interaction, and specifically usability, has come to be considered a key factor for user acceptance, making the integration of the SE and HCI communities all the more necessary. If we focus on the Model-Driven Development (MDD) paradigm, we notice a lack of proposals that deal with usability features from the very first steps of the software development process. In general, usability features are implemented manually once the code has been generated from models. This contradicts the MDD paradigm, which holds that all the analysts' effort must be focused on building models, with code generation relegated to model-to-code transformations. Moreover, usability features related to functionality may involve important changes in the system architecture if they are not considered from the early steps. We argue that these usability features related to functionality can be represented abstractly in a conceptual model, and that their implementation can be carried out automatically.
Abstract:
Background: Gray-scale images make up the bulk of the data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, working memory is managed automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to these high-level processing tools is developing new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive, and likewise not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools provide this kind of processing interface; they are usually quite task-specific, and they do not offer a clear path when one wants to shape a new command line tool from a prototype shell script.

Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design, based on atomic plug-ins and single-task command line tools, makes MIA easy to extend, usually without the need to touch or recompile existing code.

Conclusion: In this article we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of the high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms with shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
Abstract:
Empirical software engineering adapts the scientific method to software engineering (SE) in order to facilitate knowledge generation; one of the techniques employed is experimentation. For experimentally obtained knowledge to reach the level of maturity necessary for later use, experiments must be replicated. The existence of multiple replications of the same experiment entails numerous versions of the various products generated during each replication. These products are currently administered informally and with little control, which causes problems when planning new replications or trying to obtain information about replications already performed. To grasp the size of the problem to be solved, this research examines the current state of the management of experimental materials and their use in replications, as well as the existing tools for managing experimental materials. The study concludes that none of the analysed approaches provides a solution to the stated problem. The aim of this research is to improve the administration of SE experimental materials and experiment replications in support of experiment replication. To satisfy this objective, we propose adapting the software configuration management (SCM) and software product line (SPL) paradigms to experimentation; the action research method was selected to develop the proposal. To adapt SCM to experimentation, we begin by studying the experimental process as a transformation of products; we then adapt the concepts, grounded in the software development and experimentation processes; finally, we develop a set of instruments that are incorporated into an Experiment Configuration Management Plan (ECMP). To adapt SPL to experimentation, we begin by studying the concepts, activities and phases underlying SPL; we then adapt the concepts; finally, we develop or adapt the techniques, symbols and models that support the phases of the Experimentation Product Line (EPL). The proposal is validated by evaluating its feasibility, flexibility, usability and satisfaction. Feasibility and flexibility are evaluated by instantiating the ECMP and the EPL in specific SE experiments; usability is evaluated by using the proposal to generate the instances of the ECMP and the EPL; satisfaction evaluates the information about the experiment contained in the ECMP and the EPL. The results of the validation show that the proposal performs best with respect to usability and experimenter satisfaction.
Abstract:
This document is the result of a web development process to create a tool that will allow Cracow University of Technology to consult, create and manage timetables. The technologies chosen for this purpose are Apache Tomcat, MySQL Community Server, the JDBC driver, and Java Servlets and JSPs on the server side; the client side relies on JavaScript, jQuery, AJAX and CSS for its dynamic behavior. The document justifies the choice of these technologies and explains the development tools that helped in the integration and development of all these elements: specifically, the NetBeans IDE and MySQL Workbench were used. After explaining all the elements involved in the development of the web application, the architecture and the code developed are explained through UML diagrams, and some security-related implementation details are explained in more depth through sequence diagrams. As the source code of the application is provided, an installation manual has been written to get the project running. In addition, as the platform is intended to be a beta that will grow, some unimplemented ideas for future development are also presented. Finally, some annexes with important files and scripts related to the initialization of the platform are attached.

This project started from an existing tool that needed to be expanded. The main purpose of the project throughout its development has been to set the roots of a whole new platform that will replace the existing one. To this end, a deep inspection of the existing web technologies was needed: a web server and a SQL database had to be chosen. Although there were many alternatives, Java technology was finally selected for the server because of the large community behind it, the ease of modelling the language through UML diagrams, and the fact that it is license-free software. Apache Tomcat is the open source server that supports the Java Servlet and JSP technologies. As for the SQL database, MySQL Community Server is the most popular open source SQL server, also with a large community behind it and plenty of tools to manage the server. JDBC is the driver needed to connect Java and MySQL. Once the technologies that would be part of the platform were chosen, the development process started. After a detailed explanation of the installation of the development environment, UML use case diagrams were used to set out the main tasks of the platform; UML class diagrams served to establish the relations between the classes created; the architecture of the platform was represented through UML deployment diagrams; and enhanced entity-relationship (EER) models were used to define the tables of the database and their relationships. Besides these diagrams, some implementation issues are explained, with the help of UML sequence diagrams, to give a better understanding of the developed code. Once the whole platform was properly defined and developed, its performance was demonstrated: it has been shown that, in the current state of the code, the platform covers the use cases that were set as the main target. Nevertheless, some prerequisites for the proper working of the platform have been specified, and since the project is meant to grow, some ideas that could not be added to this beta have been recorded so that they are not lost for future development. Finally, some annexes containing important configuration issues for the platform have been added after proper explanation, as well as an installation guide that will let a new developer get the project ready. In addition to this document, some other files related to the project are provided:
- Javadoc. The Javadoc containing the information of every Java class created, necessary for a better understanding of the source code.
- database_model.mwb. The model of the database for MySQL Workbench; it allows, among other things, generating the MySQL script for the creation of the tables.
- ScheduleManager.war. The WAR file that allows loading the developed application into the Tomcat server without using NetBeans.
- ScheduleManager.zip. The source code exported from the NetBeans project, containing all the Java packages, JSPs, JavaScript files and CSS files that are part of the platform.
- config.properties. The configuration file that provides the names and credentials used to access the database, explained in Annex II (Example of config.properties file).
- db_init_script.sql. The SQL queries that initialize the database, explained in Annex III (SQL statements for MySQL initialization).
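As a hint of how the server-side pieces named above fit together, this is a minimal sketch of how a class could read config.properties and open a JDBC connection to MySQL. The property keys (db.url, db.user, db.password) are assumed for the sketch; the real keys are the ones documented in Annex II.

```java
// Minimal sketch of loading config.properties and opening a JDBC connection
// to MySQL from server-side code. The property keys shown are assumptions;
// the actual file format is defined in Annex II.
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public final class Database {

    public static Connection open() throws Exception {
        Properties props = new Properties();
        // Load the configuration file from the classpath
        // (e.g. WEB-INF/classes inside the deployed WAR).
        try (InputStream in =
                Database.class.getResourceAsStream("/config.properties")) {
            props.load(in);
        }
        // The MySQL Connector/J driver registers itself through JDBC 4
        // service discovery, so no explicit Class.forName call is needed.
        return DriverManager.getConnection(
                props.getProperty("db.url"),  // e.g. jdbc:mysql://localhost/timetables
                props.getProperty("db.user"),
                props.getProperty("db.password"));
    }

    private Database() { }
}
```

Keeping the credentials in an external properties file, as the project does, lets an administrator repoint the platform at another database without recompiling the WAR.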
Abstract:
To our knowledge, no current software development methodology explicitly describes how to make the transition from the analysis model to the software architecture of the application. This paper presents a method to derive the software architecture of a system from its analysis model, using MDA. Both the analysis model and the architectural model are PIMs described with UML 2. The model-type mapping designed consists of several rules (expressed using OCL and natural language) that, when applied to the analysis artifacts, generate the software architecture of the application. Specifically, the rules act on elements of the UML 2 metamodel (metamodel mapping). We have developed a tool (in Smalltalk) that permits the automatic application of these rules to an analysis model defined in Rose™, generating the application architecture expressed in the C2 architectural style.
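The paper's rules are expressed in OCL and natural language and its tool is implemented in Smalltalk; purely to illustrate the shape of a model-type mapping, the following hypothetical Java sketch shows rules that match analysis elements and produce C2-style components. Every name in it is invented for the illustration.

```java
// Hypothetical Java rendering of a model-type mapping rule, meant only to
// illustrate the shape of the approach; the paper's rules are written in OCL
// and natural language, and its tool in Smalltalk. All names are invented.
import java.util.ArrayList;
import java.util.List;

interface MappingRule {
    boolean matches(AnalysisElement element);    // guard, like an OCL precondition
    C2Component apply(AnalysisElement element);  // effect of the rule
}

// Toy stand-ins for UML 2 metamodel elements and C2 architectural elements.
record AnalysisElement(String name, boolean boundaryClass) { }
record C2Component(String name) { }

final class ArchitectureDeriver {
    // Apply every matching rule to every analysis element, collecting the
    // C2 components that make up the derived software architecture.
    static List<C2Component> derive(List<AnalysisElement> model,
                                    List<MappingRule> rules) {
        List<C2Component> architecture = new ArrayList<>();
        for (AnalysisElement element : model) {
            for (MappingRule rule : rules) {
                if (rule.matches(element)) {
                    architecture.add(rule.apply(element));
                }
            }
        }
        return architecture;
    }
}
```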
Abstract:
This project arises from the problem caused by the high levels of environmental noise that aircraft produce in their everyday operations, such as takeoff, landing or parking, and that affect populated areas close to airports. One solution for measuring and evaluating the levels produced by aircraft noise is a noise monitoring system; such systems make acoustic control possible and help reduce noise pollution in the towns bordering airports. The main objective is to build a prototype noise monitoring system capable of measuring noise in real time and of detecting and evaluating sound events caused by aircraft. The system uses specific equipment: a laptop computer, a two-channel external sound card, two microphones, and measurement software designed and developed by the author, which acts as the control center of the system and was programmed on the LabVIEW development platform. This report is structured in three parts. The first part is devoted to the state of the art and explains some of the theoretical concepts used in the project. The second part explains the methodology followed to build the monitoring system: first the equipment used is described, then the measurement software and its general architecture are presented, and finally the user interface is described. The last part presents the experiments performed, which demonstrate the correct operation of the system.
Abstract:
This work falls within the INTEGRATE and EURECA projects, whose objective is the development of a semantic interoperability layer that enables the integration of clinical data and clinical research, providing a common platform that can be integrated into different clinical institutions and that facilitates the exchange of information between them, thereby improving clinical practice through cooperation between research institutions with common goals. The projects use existing clinical standards and vocabularies, such as HL7 and SNOMED, adapting them to the particular needs of the data handled in INTEGRATE and EURECA. Clinical data are represented so that each concept used is unique, avoiding ambiguity and supporting the idea of a common platform. The student has been part of a team in the Biomedical Informatics Group at UPM, which in turn works as one of the partners of the European projects named above. The tool developed performs homogenization tasks on the information stored in the projects' databases, using the normalization mechanisms provided by the SNOMED-CT medical vocabulary. The normalized databases are the ones used to run queries through the services provided by the interoperability layer, since they contain more precise and complete information than the non-normalized databases. The work was carried out between 12 September 2014, when the training and information-gathering stage began, and 5 January 2015, when the writing of the report was completed. The life cycle used was waterfall development, in which tasks do not begin until the immediately preceding stage has been finished and validated; the writing of the report, however, was carried out in parallel with the other tasks. The total number of hours devoted to this final-year project is 324. The tasks performed and the time devoted to each are as follows: training, gathering and studying the information needed to implement the tool [30 hours]; requirements specification, documenting the requirements the tool must fulfil [20 hours]; design, taking the design decisions for the tool [35 hours]; implementation, developing the tool's code [80 hours]; testing, validating the tool both standalone and integrated in the INTEGRATE and EURECA projects [70 hours]; debugging, correcting errors and introducing improvements [45 hours]; and writing the final report [44 hours].
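As a rough illustration of the normalization task the tool performs, the following sketch maps free-text database values to single SNOMED-CT concept identifiers via a lookup table, so that queries over the interoperability layer match on one code instead of many spellings. The class, the table and its entries are invented for this sketch; a real mapping would be driven by the SNOMED-CT release files.

```java
// Minimal sketch of the normalization idea: free-text values found in the
// project databases are replaced by a single SNOMED-CT concept identifier.
// The lookup table and class names are invented for this sketch; concept IDs
// shown are illustrative.
import java.util.Map;
import java.util.Optional;

final class SnomedNormalizer {

    // Toy synonym table mapping raw strings to one concept identifier each.
    private static final Map<String, Long> SYNONYM_TO_CONCEPT = Map.of(
            "heart attack", 22298006L,         // myocardial infarction
            "myocardial infarction", 22298006L,
            "mi", 22298006L,
            "high blood pressure", 38341003L,  // hypertensive disorder
            "hypertension", 38341003L);

    // Normalize one raw database value to its SNOMED-CT concept, if known.
    static Optional<Long> normalize(String rawValue) {
        String key = rawValue.trim().toLowerCase();
        return Optional.ofNullable(SYNONYM_TO_CONCEPT.get(key));
    }
}
```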
Abstract:
The development of cognitive robots needs strong "sensorial" support that allows the robot to perceive the real world and interact with it properly; the development of efficient visual-processing software for effective artificial agents is therefore a must. In this project we study and develop visual-processing software that works as the "eyes" of a cognitive robot. This software performs a three-dimensional mapping of the robot's environment, providing it with the essential information required to make proper decisions during navigation. Given the complexity of this objective, we adopted the Scrum methodology in order to achieve an agile development process, which has allowed us to correct and improve successive versions of the product quickly. The project is structured in sprints, which cover the different stages of software development based on the requirements imposed by the robot and its real needs. We initially explored different commercial devices for acquiring the required visual information, adopting the Kinect sensor camera (Microsoft) as the most suitable option. Later on, we studied the available software for managing the visual information obtained and for integrating it with the robot's software, choosing the high-level platform Matlab as the common nexus joining the management of the camera, the management of the robot and the implementation of the behavioral algorithms. In the final stages, the software was extended with the fundamental functionalities required to process the real environment, such as depth representation, segmentation and clustering. Finally, the software was optimized to achieve real-time processing and performance suitable to the robot's requirements during operation in real situations.