969 results for Driven Development
Abstract:
In a world increasingly run by software, the need arises for effective approaches that encourage software reuse. The practice of reuse must be aligned with a set of practices, procedures, and methodologies that produce a stable, high-quality product. These concerns give rise to new styles and approaches in software engineering. Accordingly, this thesis addresses concepts related to model-driven development and model-driven architecture. The model-driven approach automates significant parts of development by leveraging the models produced in the specification phase. Defining terms such as model, architecture, and platform sharpens the focus, because MDA and MDD depend on a clear separation between technical and business concerns. Key processes are covered in order to highlight the artifacts built at each stage of model-driven development. The development stages CIM, PIM, PSM, and ISM are described, detailing the purpose of each phase so that, at the end of each stage, artifacts are gradually produced and progressively specialized. Models are handled from different modeling perspectives, abstracting concepts and building up the details that portray a specific scenario. This portrayal can be a graphical or textual representation; in most cases a modeling language such as UML is chosen. To provide a practical view, this dissertation presents tools that improve the construction of models and the code generation that assists development, keeping the documentation in step with the system. Finally, it presents a case study that ties back to the theoretical aspects discussed throughout the dissertation; it is thus expected that model-driven architecture and development can clarify important features to consider in software engineering.
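To make the model-to-code step concrete, here is a minimal sketch of the kind of model-to-text transformation that carries a platform-specific model into implementation code. The Entity/Attribute structures and the template are illustrative assumptions, not the thesis's actual tooling.

```java
// A minimal sketch of model-to-text generation, as in the PSM -> ISM step.
// The Entity/Attribute model and the template are illustrative only.
import java.util.List;
import java.util.stream.Collectors;

public class CodeGenSketch {
    record Attribute(String name, String type) {}
    record Entity(String name, List<Attribute> attributes) {}

    // Render a platform-specific model element as Java source text.
    static String generate(Entity e) {
        String fields = e.attributes().stream()
                .map(a -> "    private " + a.type() + " " + a.name() + ";")
                .collect(Collectors.joining("\n"));
        return "public class " + e.name() + " {\n" + fields + "\n}\n";
    }

    public static void main(String[] args) {
        Entity invoice = new Entity("Invoice",
                List.of(new Attribute("id", "long"), new Attribute("total", "double")));
        System.out.println(generate(invoice));
    }
}
```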
Abstract:
Software engineering originated from the motivation to mass-produce components and increase productivity in building systems. Since its origins, numerous studies have been proposed on the subject, as new paradigms for creating systems, such as Object-Oriented Programming and Aspect-Oriented Programming, have been established and methodologies have been developed to apply them efficiently. However, years of study in the area have not been sufficient to create a methodology for reusing software artifacts that is truly efficient and easy enough to become widespread. Given this, Model-Driven Development (MDD) attempts to promote reuse by taking the modeling of systems as its reference, making models part of the development itself and establishing a substantial productivity gain. One of its approaches is called Model-Driven Software Development (MDSD), which focuses on improving development practices by using Domain-Specific Languages (DSLs) for this purpose. In this final paper, Xtext is used as a tool to demonstrate the productivity and efficiency of this approach; to that end, bibliographic studies of the approach and the tool were carried out, and a methodology and a case study are presented to demonstrate the results and conclusions of this work.
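As a rough illustration of what a DSL buys, the sketch below hand-writes an interpreter for a one-keyword language. With Xtext, the grammar would be declared once and the parser, model classes, and editor support would be generated; the `greet` keyword here is purely hypothetical.

```java
// Illustrative only: a tiny hand-written interpreter for a made-up DSL
// statement such as "greet <name>". Xtext would generate this machinery
// from a grammar definition instead.
public class MiniDslSketch {
    static void run(String program) {
        program.lines().map(String::trim).filter(l -> !l.isEmpty()).forEach(line -> {
            String[] parts = line.split("\\s+", 2);
            if (parts[0].equals("greet")) {
                System.out.println("Hello, " + parts[1] + "!");
            } else {
                throw new IllegalArgumentException("unknown keyword: " + parts[0]);
            }
        });
    }

    public static void main(String[] args) {
        run("greet world"); // prints: Hello, world!
    }
}
```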
Abstract:
Models are becoming increasingly important in the software development process. As a consequence, the number of models being used is increasing, and so is the need for efficient mechanisms to search them. Various existing search engines could be used for this purpose, but they lack features to properly search models, mainly because they are strongly focused on text-based search. This paper presents Moogle, a model search engine that uses metamodeling information to create richer search indexes and to allow more complex queries to be performed. The paper also presents the results of an evaluation of Moogle, which showed that the metamodel information improves the accuracy of the search.
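A minimal sketch of the idea behind metamodel-aware indexing: model elements are indexed by their metamodel type, so queries can filter on type as well as text. The data structures below are illustrative assumptions, not Moogle's actual design.

```java
// Index model elements by metamodel type so a query like
// "UML Class whose name contains 'customer'" can be answered,
// rather than relying on plain text search alone.
import java.util.*;

public class ModelIndexSketch {
    record ModelElement(String metaType, String name) {}

    private final Map<String, List<ModelElement>> byMetaType = new HashMap<>();

    void index(ModelElement e) {
        byMetaType.computeIfAbsent(e.metaType(), k -> new ArrayList<>()).add(e);
    }

    // Query combining a metamodel type with a plain-text name fragment.
    List<ModelElement> search(String metaType, String nameFragment) {
        return byMetaType.getOrDefault(metaType, List.of()).stream()
                .filter(e -> e.name().toLowerCase().contains(nameFragment.toLowerCase()))
                .toList();
    }

    public static void main(String[] args) {
        ModelIndexSketch idx = new ModelIndexSketch();
        idx.index(new ModelElement("uml:Class", "Customer"));
        idx.index(new ModelElement("uml:Attribute", "customerName"));
        System.out.println(idx.search("uml:Class", "customer")); // matches the Class only
    }
}
```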
Abstract:
[ES] This work describes the development of an application for recording the teaching hours delivered by lecturers at the university. The aim is to have this information digitized in order to speed up any administrative procedures that involve it. On the lecturers' side, notifications will be sent by e-mail to confirm the teaching signed off, serving as a personal record so that lecturers know what teaching they have delivered and, in the case of a substitution, so that the substituted lecturer also has a record of it. Development will rely on agile methods, using test-driven development for the model and persistence modules.
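A test-first sketch in the style the abstract describes for the model and persistence modules: the JUnit test is written before the class it exercises and drives its design. The `TeachingLog` class and its methods are hypothetical, not taken from the application.

```java
// Test-driven development sketch: the test below is written first and
// drives the design of TeachingLog. Names are hypothetical.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TeachingLogTest {
    @Test
    void recordedHoursAreAccumulatedPerLecturer() {
        TeachingLog log = new TeachingLog();
        log.record("prof.garcia", 2); // two hours taught
        log.record("prof.garcia", 1);
        assertEquals(3, log.hoursFor("prof.garcia"));
    }
}

// Minimal implementation written afterwards, just enough to make the test pass.
class TeachingLog {
    private final java.util.Map<String, Integer> hours = new java.util.HashMap<>();
    void record(String lecturer, int h) { hours.merge(lecturer, h, Integer::sum); }
    int hoursFor(String lecturer) { return hours.getOrDefault(lecturer, 0); }
}
```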
Abstract:
The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it; thus the trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and thereby ease the cost-benefit analysis they undertake when trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and the cross-project activity of developers to calculate a "karma" value for each developer. Through the karma values of all its developers, a project is ranked on a trustability scale. We present JBENDER, a proof-of-concept code search engine that implements our trustability metric, and we discuss preliminary results from an evaluation of the prototype.
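The abstract does not give the karma formula, so the sketch below assumes a simple combination of net votes and cross-project activity, with a project's trustability taken as the mean karma of its developers; it is an illustration of the idea, not JBENDER's actual metric.

```java
// Assumed karma formula: net votes plus a bonus per distinct project
// contributed to. The weights are illustrative, not from the paper.
import java.util.List;

public class TrustabilitySketch {
    record Developer(String name, int upvotes, int downvotes, int projectsContributedTo) {}

    static double karma(Developer d) {
        return (d.upvotes() - d.downvotes()) + 0.5 * d.projectsContributedTo();
    }

    // Rank a project by the mean karma of its developers.
    static double projectTrustability(List<Developer> team) {
        return team.stream().mapToDouble(TrustabilitySketch::karma).average().orElse(0);
    }

    public static void main(String[] args) {
        List<Developer> team = List.of(
                new Developer("alice", 40, 2, 6),
                new Developer("bob", 10, 1, 2));
        System.out.printf("trustability = %.1f%n", projectTrustability(team));
    }
}
```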
Abstract:
Service-Oriented Architectures (SOA), and Web Services (WS), the technology generally used to implement them, achieve the integration of heterogeneous technologies, providing interoperability and enabling the reuse of pre-existing systems. Model-driven development methodologies provide inherent benefits such as increased productivity, greater reuse, and better maintainability, to name a few. Efforts to achieve model-driven development of SOAs already exist, but there is currently no standard solution that also addresses the non-functional aspects of these services. This paper presents an approach to integrating these non-functional aspects into the development of web services, with an emphasis on security.
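One way to picture the approach: attach a security requirement to a service model element and let a generator pass read it. The `@Secure` annotation and policy name below are hypothetical stand-ins for the paper's actual modeling notation.

```java
// Sketch: mark a service operation with a non-functional security
// requirement, then have a "generator" pass discover it via reflection.
// A real generator would emit security configuration (e.g., a
// WS-SecurityPolicy fragment) for each annotated operation.
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class SecureServiceSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Secure { String policy(); }

    interface OrderService {
        @Secure(policy = "message-integrity-and-confidentiality") // hypothetical policy name
        void placeOrder(String orderXml);
    }

    public static void main(String[] args) throws Exception {
        for (Method m : OrderService.class.getMethods()) {
            Secure s = m.getAnnotation(Secure.class);
            if (s != null) {
                System.out.println(m.getName() + " requires policy: " + s.policy());
            }
        }
    }
}
```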
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service, and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. The discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. Through the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are: the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
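A minimal sketch of the content-discovery level, assuming a discovery rule that maps a CSS selector to a semantic type; the rule format and type names are illustrative, not the thesis's scraping ontology, and HTML parsing uses the jsoup library.

```java
// Content discovery sketch: a rule binds a CSS selector to a semantic
// type; applying it to an HTML representation yields typed contents.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ContentDiscoverySketch {
    record DiscoveryRule(String cssSelector, String semanticType) {}

    public static void main(String[] args) {
        String html = "<html><body><h1 class='headline'>Markets rally</h1></body></html>";
        Document doc = Jsoup.parse(html);

        DiscoveryRule rule = new DiscoveryRule("h1.headline", "news:headline"); // illustrative
        doc.select(rule.cssSelector()).forEach(el ->
                System.out.println(rule.semanticType() + " -> " + el.text()));
    }
}
```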
Abstract:
Because of the growing availability of third-party APIs, services, widgets, and other reusable web components, mashup developers now face a vast number of candidate components for their developments. Moreover, these components are quite often scattered across many different repositories and web sites, which makes their selection and discovery difficult. In this paper, we discuss the problem of component selection in Service-Oriented Architectures (SOA) and Mashup-Driven Development, and introduce the Linked Mashups Ontology (LiMOn), a model for describing mashups and their components in order to integrate and share mashup information such as categorization or dependencies. The model has allowed the building of an integrated, centralized metadirectory of web components for query and selection, which has served to evaluate the model. The metadirectory provides access to various heterogeneous repositories of mashups and web components while using external information from the Linked Data cloud, helping mashup development.
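A sketch of describing a mashup component as Linked Data in the spirit of LiMOn, using Apache Jena; the `limon:` namespace and property names are hypothetical stand-ins for the ontology's actual vocabulary.

```java
// Describe a mashup and one of its components as RDF triples and print
// the resulting metadirectory entry as Turtle. Vocabulary is illustrative.
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;

public class LimonSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/limon#"; // hypothetical namespace
        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("limon", ns);

        Resource widget = m.createResource("http://example.org/components/weatherWidget")
                .addProperty(RDF.type, m.createResource(ns + "Component"))
                .addProperty(m.createProperty(ns + "category"), "weather");

        m.createResource("http://example.org/mashups/tripPlanner")
                .addProperty(RDF.type, m.createResource(ns + "Mashup"))
                .addProperty(m.createProperty(ns + "usesComponent"), widget);

        m.write(System.out, "TURTLE");
    }
}
```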
Abstract:
Software Product Line Engineering has significant advantages for family-based software development. The common and variable structure of all products of a family is defined through a Product-Line Architecture (PLA), which consists of a common set of reusable components and connectors that can be configured to build the different products. The design of a PLA requires solutions for capturing such configuration (variability). The Flexible-PLA Model is a solution that supports the specification of the external variability of the PLA configuration, as well as the internal variability of components. However, complete support for product-line development requires translating architecture specifications into code, a complex task that needs automation to avoid human error. Since Model-Driven Development allows automatic code generation from models, this paper presents a solution to automatically generate AspectJ code from Flexible-PLA models previously configured to derive specific products. This solution is supported by a modeling framework and validated in a software factory.
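A sketch of the generation step the paper automates: deriving aspect code from a configured variability model. The `Variant` record and the emitted AspectJ skeleton are illustrative; the real Flexible-PLA metamodel is far richer.

```java
// Model-to-code sketch: a configured variant of the product line is
// turned into AspectJ source text by a simple template.
public class PlaToAspectJSketch {
    record Variant(String aspectName, String pointcutExpr, String adviceBody) {}

    static String emitAspect(Variant v) {
        return """
                public aspect %s {
                    pointcut hook(): %s;
                    before(): hook() { %s }
                }
                """.formatted(v.aspectName(), v.pointcutExpr(), v.adviceBody());
    }

    public static void main(String[] args) {
        Variant logging = new Variant("LoggingVariant",
                "execution(* com.example.Order.*(..))",
                "System.out.println(\"order operation\");");
        System.out.print(emitAspect(logging)); // generated AspectJ source
    }
}
```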
Abstract:
Objective: This Master's Thesis (TFM) discusses the importance of demographic data in the analysis of data from experiments in Software Engineering (SE). It also attempts to study the possible methodologies that can be used to perform such analysis. Context: The analysis is performed on the results of an experiment conducted by the Empirical SE Research Group of the UPM, aimed at analyzing the impact of the use of Test-Driven Development (TDD) on the quality and productivity of software development, compared to traditional Test-Last Development (TLD). Research Method: Eight demographic variables were analyzed against three response variables. The methodologies and analysis techniques reviewed are dichotomization, Pearson correlation, multiple linear regression, and stepwise regression. Results: There is no clear evidence that demographic variables influence the results of SE experiments. However, the results are not conclusive, and the investigation remains open to replication with a broader, more representative sample. Regarding the analysis methodology applied, dichotomization and Pearson correlation have deficiencies that are overcome by multiple linear regression and stepwise regression. Conclusion: It is vitally important to find evidence of the influence of the demographic characteristics of experimental subjects in the data analysis of SE experiments. A good method for analyzing this influence has been found, but this analysis must be replicated on more SE experiments to obtain better-founded results.
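For illustration, the sketch below runs two of the analyses the thesis reviews, Pearson correlation and multiple linear regression, using Apache Commons Math on made-up toy data; it is not the experiment's dataset or exact procedure.

```java
// Correlate one demographic variable with a response, then fit an OLS
// multiple regression over two demographic predictors. Data are toy values.
import org.apache.commons.math3.stat.correlation.PearsonsCorrelation;
import org.apache.commons.math3.stat.regression.OLSMultipleLinearRegression;

public class DemographicsAnalysisSketch {
    public static void main(String[] args) {
        double[] experienceYears = {1, 3, 2, 5, 4, 7}; // demographic variable
        double[] quality = {52, 60, 55, 71, 66, 80};   // response variable

        double r = new PearsonsCorrelation().correlation(experienceYears, quality);
        System.out.printf("Pearson r = %.3f%n", r);

        // Two predictors: experience and a (toy) prior-TDD-exposure score.
        double[][] predictors = {{1, 0}, {3, 1}, {2, 0}, {5, 2}, {4, 1}, {7, 3}};
        OLSMultipleLinearRegression ols = new OLSMultipleLinearRegression();
        ols.newSampleData(quality, predictors);
        double[] beta = ols.estimateRegressionParameters(); // intercept first
        System.out.printf("intercept=%.2f, b1=%.2f, b2=%.2f%n", beta[0], beta[1], beta[2]);
    }
}
```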
Abstract:
Question answering (QA) systems can be regarded as potential successors to traditional Web search engines. To be precise, they must be adapted to specific domains through the use of suitable semantic resources. This adaptation is not a trivial task, since several heterogeneous resources related to a restricted domain must be integrated into and incorporated with existing QA systems. This paper presents the Maraqa tool, whose novelty lies in the use of software engineering techniques, such as model-driven development, to automate this process of adaptation to restricted domains. Maraqa has been evaluated through a series of experiments (on the agricultural domain) that demonstrate its viability, improving the precision of the adapted system by 29.5%.
Abstract:
This study explored the relationship between social fund projects and poverty reduction in selected communities in Jamaica. The Caribbean nation's social fund projects aim to reduce "public" poverty by rehabilitating and expanding social and economic infrastructure, improving social services, and strengthening organizations at the community level. Research questions addressed the characteristics of poverty-focused social fund projects; the nexus between poverty reduction and three key concepts suggested by the literature—community (citizen) participation, social capital, and empowerment; and the impact of the projects on poverty. In this qualitative study, data were collected and triangulated by means of in-depth, semi-structured interviews, supplemented by key informant data; non-participant observation; and document reviews. Thirty-four respondents were interviewed individually at eight rural and urban sites over a period of four consecutive months, and 10 key informants provided supplementary data. Open, axial, and selective coding was used for data reduction and analysis as part of the grounded theory method, which included constant comparative analysis. The codes generated a set of themes and a substantive-formal theory. Findings were cross-checked with interview respondents and key informants and validated by means of an audit trail. The results have revealed that the approach to poverty reduction in social fund-supported communities is a process of development-focused collaboration among various stakeholders. The process encompasses four stages: (1) identifying problems and priorities, (2) motivating and mobilizing, (3) working together, and (4) creating an enabling environment. The underlying stakeholder involvement theory posits that collaboration increases the productivity of resources and creates the conditions for community-driven development. In addition, the study has found that social fund projects are largely community-based, collaborative, and highly participatory in their implementation, as well as prescription-driven, results-oriented, and leadership-dependent. Further, social capital formation across communities was found to be limited, and in general, the projects have been enabling rather than empowering. The projects have not reduced poverty per se; however, they have been instrumental in improving conditions that were concomitants of poverty.
Abstract:
In Model-Driven Engineering (MDE), the developer creates a model using a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT) and uses tools such as Papyrus or Papyrus-RT to generate code from that model. Tracing allows developers to gain insights into their application as it runs, such as which events occur and their timing. We add monitoring capabilities, using the Linux Trace Toolkit: next generation (LTTng), to models created in UML-RT with Papyrus-RT. The implementation requires changing the code generator to add tracing statements, for the events the user wants to monitor, to the generated code. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that allows developers to view their traces visually using Trace Compass, an Eclipse-based trace viewing tool. Finally, we validate our results using three models that we create and trace.
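The generator change amounts to wrapping generated event handlers with trace statements. Papyrus-RT actually emits C++ and the thesis uses LTTng tracepoints; this Java sketch only illustrates the instrumentation pattern, with a stand-in `Tracer` in place of a real LTTng binding.

```java
// Instrumentation pattern sketch: the code generator injects a trace call
// at the start of each generated event handler. Names are hypothetical.
public class TracedCapsuleSketch {
    static final class Tracer {
        static void tracepoint(String capsule, String event) {
            // A real generator would emit an LTTng tracepoint call here.
            System.out.printf("[%d ns] %s : %s%n", System.nanoTime(), capsule, event);
        }
    }

    // Shape of a generated transition handler after instrumentation.
    static void onPing() {
        Tracer.tracepoint("PingerCapsule", "ping_received"); // injected by the generator
        // ... original generated behaviour of the transition goes here ...
    }

    public static void main(String[] args) {
        onPing();
    }
}
```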
Abstract:
Automated acceptance testing is higher-level testing that checks whether the system abides by the requirements desired by the business clients, using scripts separate from the software itself. This project is a study of the feasibility of acceptance tests written according to Behavior-Driven Development principles. The project includes an implementation part in which automated acceptance tests were written for the Touch-point web application developed by Dewire (a software consulting company) for Telia (a telecom company), from the requirements received from the customer (Telia). The automated acceptance tests use the Cucumber-Selenium framework, which enforces Behavior-Driven Development principles. The purpose of the implementation is to verify the practicability of this style of acceptance testing. From the completed implementation, it was concluded that all real-world customer requirements could be converted into executable specifications, and the process was not at all time-consuming or difficult for a less experienced programmer such as the author. The project also includes a survey to measure the learnability and understandability of Gherkin, the language that Cucumber understands. The survey consists of Gherkin examples followed by questions that include making changes to those examples. The survey had three parts: the first easy, the second medium, and the third most difficult. It also used a linear scale from 1 to 5 to rate the difficulty of each part, where 1 stood for very easy and 5 for very difficult. The time at which participants began the survey was also recorded in order to calculate the total time they took to learn the material and answer the questions. The survey was taken by 18 employees of Dewire whose primary working role was programmer, tester, or project manager. In the results, testers and project managers were grouped as non-programmers. The survey concluded that Gherkin is very easy and quick to learn, with participants rating it as very easy.
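A sketch of how Cucumber binds Gherkin steps to Selenium calls in Java step definitions. The scenario wording, page URL, and element IDs are hypothetical, not from the Touch-point application.

```java
// Step definitions for a Gherkin scenario such as:
//   Given a user is on the login page
//   When the user logs in as "alice"
//   Then the dashboard greets "alice"
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class LoginSteps {
    // Sketch only: a real suite would manage the driver's lifecycle in hooks.
    private final WebDriver driver = new ChromeDriver();

    @Given("a user is on the login page")
    public void userIsOnLoginPage() {
        driver.get("https://example.com/login"); // hypothetical URL
    }

    @When("the user logs in as {string}")
    public void userLogsInAs(String name) {
        driver.findElement(By.id("username")).sendKeys(name);
        driver.findElement(By.id("loginButton")).click();
    }

    @Then("the dashboard greets {string}")
    public void dashboardGreets(String name) {
        assertEquals("Welcome, " + name, driver.findElement(By.id("greeting")).getText());
    }
}
```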
Abstract:
Dissertation presented to obtain the Ph.D. degree in Chemistry.