91 results for Capability Maturity Model for Software


Relevance:

30.00%

Publisher:

Abstract:

In the field of Room Acoustics it is common to use scale models to study a room. Through this method it is possible to predict the room's behaviour and to detect and correct problems before it is built, saving many resources. Nowadays this method has been relegated to a secondary position by the rise of simulation software, which makes it possible to study rooms in a cheap, flexible and simple way that is also potentially less time consuming. Nevertheless, the scale model method is still under study, as it may provide additional information. This project focuses on the pedagogic possibilities of the scale model method, which offers the student the opportunity to study and grasp some of the most important phenomena in Room Acoustics in a more intuitive way than a software simulation alone. Furthermore, most of the existing software in this field is aimed at the technician working in the lab as efficiently as possible, not at the student trying to understand and learn. Here, the facilities and resources of Syddansk Universitet regarding this matter are studied and evaluated, as well as the experimental procedure, paying special attention not only to its reliability and accuracy but also to its didactic possibilities. In addition, where possible, improvements that could enhance any of these aspects are suggested.

The project is divided into several clearly differentiated sections. The Background and Theoretical Basis section introduces the study and simulation of acoustic rooms, explains their importance and usefulness, and reviews the current state of these techniques, covering the different methods in use together with their theoretical bases and main advantages and drawbacks. The Project section analyses the different factors involved: the resources available to the student, from the software and hardware to the measurement equipment and other material needed for the laboratory exercises. The core of the work lies here and consists of measuring and verifying the most relevant characteristics of the equipment involved, which makes it possible to confirm its validity and accuracy from both a technical and a pedagogical point of view, and to establish the limits within which the model can be considered reliable. This section closes with the influence of air absorption at high frequencies and the variation of the absorption and scattering coefficients of the materials with frequency. Finally, a subjective verification of the complete system is carried out, because technical limitations prevented evaluating the set-up over the equivalent of the whole audible range, and because the ultimate goal of the methods studied is to ensure good perception by the listener in the given room. The Conclusions section briefly summarises the conclusions drawn and assesses the overall performance and usefulness of the model, which, despite some understandable accuracy and repeatability problems due to the means used, is valid for illustrating the physical phenomena to be taught to the student. The Future Work section proposes several lines of work for future projects at Syddansk Universitet that could help confirm the work carried out here, improve the accuracy and reliability of the set-up, or enrich the pedagogic possibilities of the related laboratory exercises. Finally, after the references, the appendices contain tables and graphs for the measurements taken in different parts of the work; further information and material related to the project can be found on the accompanying CD.
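
As a back-of-the-envelope illustration of the scale-model principle described above (not taken from the project, and assuming a hypothetical 1:10 scale factor and invented room dimensions), the following sketch shows how measurement frequencies and Sabine reverberation times map between a full-scale room and its model:

```python
# Illustrative sketch (not from the project): frequency scaling and a Sabine
# estimate for a hypothetical 1:10 scale model of a rectangular room.

def sabine_rt60(volume_m3, absorption_area_m2):
    """Sabine reverberation time T60 = 0.161 * V / A (SI units)."""
    return 0.161 * volume_m3 / absorption_area_m2

SCALE = 10                          # hypothetical 1:10 model

# Full-scale room (assumed dimensions, purely for illustration)
V_full = 20.0 * 15.0 * 8.0          # m^3
A_full = 350.0                      # m^2 of equivalent absorption area
t60_full = sabine_rt60(V_full, A_full)

# In a 1:SCALE model, lengths shrink by SCALE, so measurement frequencies are
# multiplied by SCALE to keep the wavelength/dimension ratio, and the measured
# decay times (ignoring excess air absorption) shrink by SCALE.
f_band_full = 1000.0                      # Hz, band of interest at full scale
f_band_model = f_band_full * SCALE        # Hz to use in the model measurement
t60_model_expected = t60_full / SCALE     # before air-absorption correction

print(f"Full scale : T60 ~ {t60_full:.2f} s at {f_band_full:.0f} Hz")
print(f"1:{SCALE} model : expect T60 ~ {t60_model_expected:.2f} s at {f_band_model:.0f} Hz")
```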

Relevance:

30.00%

Publisher:

Abstract:

Over the last century, research in the business, marketing and technology fields has developed the innovation research line, and a large amount of knowledge can be found in the literature. Currently, the importance of systematic and open approaches to managing the available innovation sources is well established in many knowledge fields, including the software engineering sector, where organizations need to absorb and exploit as many innovative ideas as possible to succeed in the current competitive environment. This Master Thesis presents a study of the innovation sources in the software engineering field. The main research goals of this work are the identification and relevance assessment of the available innovation sources and the understanding of trends in innovation source usage. Firstly, a general review of the literature was conducted in order to define the research area and to identify research gaps. Secondly, a Systematic Literature Review (SLR) was adopted as the research method in order to report reliable conclusions by systematically collecting quality evidence about innovation sources in the software engineering field. This contribution provides resources, built on the empirical studies included in the SLR, to support the systematic identification and adequate exploitation of the innovation sources most suitable for software engineering. Several artefacts, such as lists, taxonomies and relevance assessments of the innovation sources most suitable for software engineering, have been built, and their usage trends in the last decades and their particularities in some countries and knowledge fields, especially software engineering, have been researched. This work can help researchers, managers and practitioners of innovative software organizations systematize critical activities in innovation processes, such as the identification and exploitation of the most suitable opportunities. Innovation researchers can use the results of this work to conduct studies in the innovation sources research area, whereas organization managers and software practitioners can use the provided outcomes in a systematic way to improve their innovation capability, consequently increasing the value created by the processes they run to provide products and services useful to their environment. In summary, this Master Thesis investigates the innovation sources in the software engineering field, providing useful resources to support effective innovation source management. Several aspects should be studied in more depth to increase the accuracy of the presented results and to obtain more resources built on empirical knowledge. This can be supported by the INnovation SOurces MAnagement (InSoMa) framework, which is introduced in this work to encourage open and systematic approaches to identifying and exploiting innovation sources in the software engineering field.
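
As a purely hypothetical illustration of the kind of artefact the thesis describes (a list of innovation sources with relevance assessments), the following sketch ranks invented sources by invented relevance scores; neither the names nor the scores come from the SLR:

```python
# Toy illustration (hypothetical names and scores, not the thesis's data):
# an innovation-source list with relevance assessments, ranked for reuse.

innovation_sources = [
    # (source, relevance score 0..1 -- hypothetical values)
    ("customers / users",       0.90),
    ("internal R&D",            0.85),
    ("competitors",             0.70),
    ("open-source communities", 0.65),
    ("academic literature",     0.55),
]

def rank_sources(sources, threshold=0.6):
    """Return sources at or above the relevance threshold, most relevant first."""
    relevant = [s for s in sources if s[1] >= threshold]
    return sorted(relevant, key=lambda s: s[1], reverse=True)

for name, score in rank_sources(innovation_sources):
    print(f"{score:.2f}  {name}")
```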

Relevance:

30.00%

Publisher:

Abstract:

In the past few years IT outsourcing has gained a lot of importance in the market; the IT services outsourcing market, for example, is still growing every year. Now more than ever, organizations are increasingly becoming acquirers of needed capabilities, obtaining products and services from suppliers and developing less and less of these capabilities in-house. IT supplier selection is a complex and opaque decision problem, and managers facing such a decision have difficulty framing what needs to be thought about further in their discourses. According to a study from the SEI (Software Engineering Institute) [40], 20 to 25 percent of large information technology (IT) acquisition projects fail within two years and 50 percent fail within five years. Mismanagement, poor requirements definition, lack of the comprehensive evaluations that could be used to come up with the best candidates for outsourcing, inadequate supplier selection and contracting processes, insufficient technology selection procedures, and uncontrolled requirements changes are factors that contribute to project failure. The majority of these failures could be avoided if the acquirer learned to understand the decision problem, perform better decision analysis, and exercise good judgment. The main objective of this work is the development of a decision model for IT supplier selection that aims to decrease the number of failures seen in client-supplier relationships, most of which are caused by a poor selection of the supplier by the client. Besides the problems described above, a further motivation for this work is the absence of any decision model based on a multi-model approach (a mixture of acquisition models and decision methods) for the IT supplier selection problem. In the case study, nine Spanish companies were analyzed according to the IT supplier selection decision model developed in this work. Two software products were used in the case study: Expert Choice and D-Sight.
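
The decision model itself combines acquisition models with multi-criteria decision methods (supported by tools such as Expert Choice and D-Sight); as a much simpler, hypothetical illustration of multi-criteria supplier scoring, a weighted-sum sketch with invented criteria, weights and scores might look like this:

```python
# Minimal weighted-sum sketch of multi-criteria supplier scoring.
# Criteria, weights and scores are hypothetical and only illustrative.

criteria_weights = {          # hypothetical weights, summing to 1.0
    "cost": 0.30,
    "technical_capability": 0.35,
    "past_performance": 0.20,
    "contract_flexibility": 0.15,
}

suppliers = {                 # scores normalised to 0..1 (hypothetical)
    "Supplier A": {"cost": 0.6, "technical_capability": 0.9,
                   "past_performance": 0.8, "contract_flexibility": 0.5},
    "Supplier B": {"cost": 0.8, "technical_capability": 0.6,
                   "past_performance": 0.7, "contract_flexibility": 0.9},
}

def weighted_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(suppliers.items(),
                 key=lambda kv: weighted_score(kv[1], criteria_weights),
                 reverse=True)
for name, scores in ranking:
    print(f"{name}: {weighted_score(scores, criteria_weights):.3f}")
```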

Relevance:

30.00%

Publisher:

Abstract:

This research addressed the development of a consolidated model designed especially to cover the security and usability attributes of a software product. As a starting point, we built a new usability model on the basis of well-known quality standards and models. We then used an existing security model to analyse the relationship between these two approaches. This analysis consisted of a systematic mapping study of the relationship between security and usability as global quality factors. We identified five relationship types: inverse, direct, relative, one-way inverse, and no relationship. Most authors agree that there is an inverse relationship between security and usability. However, this is not a unanimous finding, and this study unveils a number of open questions, like application domain dependency and the need to explore lower-level relationships between attribute subcharacteristics. In order to clarify the questions raised during the research, we conducted a second systematic mapping to further analyse the finer-grained structure of these factors, such as authentication as a subset of security and user efficiency as a subset of usability. The most relevant finding is that efficiency does not depend on the security level during the authentication process. There are other subfactors that require analysis. Accordingly, this research is the first part of a larger project to develop a full-blown consolidated model for security and usability.

Relevance:

30.00%

Publisher:

Abstract:

A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user-friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, Maximum Likelihood Expectation Maximization (MLEM) and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example using experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using a low number of projections or a limited angular range.
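
For readers unfamiliar with MLEM, the following NumPy sketch shows the textbook form of the update that the package implements; it is only an illustration and not TomoRebuild's optimized C++ code:

```python
# Textbook MLEM update for a linear model y ~ Poisson(A @ x); illustrative only,
# not TomoRebuild's implementation.
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    """A: (n_projections, n_voxels) system matrix; y: measured projections."""
    x = np.ones(A.shape[1])              # non-negative initial image
    sens = A.sum(axis=0) + eps           # sensitivity term A^T 1
    for _ in range(n_iter):
        forward = A @ x + eps            # forward projection of current estimate
        ratio = y / forward              # compare measured data with estimate
        x *= (A.T @ ratio) / sens        # multiplicative update keeps x >= 0
    return x

# Tiny synthetic check: 2 "voxels", 3 projection rays, noiseless data.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
print(mlem(A, y))                        # converges towards [2, 3]
```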

Relevance:

30.00%

Publisher:

Abstract:

A course focused on the acquisition of integration competencies in ship production engineering, organized in collaboration with selected industry partners, is presented in this paper. The first part of the course is dedicated to Project Management: the students acquire skills in defining, using MS-PROJECT, the work breakdown structure (WBS) and the organization breakdown structure (OBS) in Engineering projects, through a series of examples of increasing complexity, the final one being the construction planning of a vessel. The second part of the course is dedicated to the use of a database manager, MS-ACCESS, in managing production related information. A series of examples of increasing complexity is treated, the final one being the management of the piping database of a real vessel. This database consists of several thousand pipes, for which a production timing frame is defined, connecting this part of the course with the first one. Finally, the third part of the course is devoted to working with FORAN, an Engineering Production application developed by SENER and widely used in the shipbuilding industry. With this application, the structural elements where all the outfittings will be located are defined through cooperative work by the students, working simultaneously in the same 3D model. In this paper, specific details about the learning process are given. Surveys have been given to the students in order to get feedback on their experience as well as to assess their satisfaction with the learning process, compared to more traditional ones. Results from these surveys are discussed in the paper.
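
The course itself uses MS-ACCESS; purely as an illustration of the idea of linking a piping database to a production timing frame, a minimal sqlite3 sketch with hypothetical tables and columns could look like this:

```python
# Minimal sketch of a piping/production-timing link.
# The course uses MS-ACCESS; table and column names here are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pipes (pipe_id TEXT, block TEXT, diameter_mm REAL)")
con.execute("CREATE TABLE production (pipe_id TEXT, planned_week INTEGER)")

con.executemany("INSERT INTO pipes VALUES (?, ?, ?)", [
    ("P-001", "BLK-10", 50.0),
    ("P-002", "BLK-10", 80.0),
    ("P-003", "BLK-20", 50.0),
])
con.executemany("INSERT INTO production VALUES (?, ?)", [
    ("P-001", 12), ("P-002", 12), ("P-003", 15),
])

# Pipes per block and week, i.e. the timing frame that ties the database part
# of the course to the project-planning part.
query = """
    SELECT p.block, pr.planned_week, COUNT(*)
    FROM pipes p JOIN production pr ON p.pipe_id = pr.pipe_id
    GROUP BY p.block, pr.planned_week
"""
for row in con.execute(query):
    print(row)
```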

Relevance:

30.00%

Publisher:

Abstract:

Automated and semi-automated accessibility evaluation tools are key to streamlining the process of accessibility assessment, and ultimately to ensuring that software products, contents, and services meet accessibility requirements. Different evaluation tools may better fit different needs and concerns, accounting for a variety of corporate and external policies, content types, invocation methods, deployment contexts, exploitation models, intended audiences and goals, and the specific overall process where they are introduced. This has led to the proliferation of many evaluation tools tailored to specific contexts. However, tool creators, who may not be familiar with the realm of accessibility and may be part of a larger project, lack any systematic guidance when facing the implementation of accessibility evaluation functionalities. Herein we present a systematic approach to the development of accessibility evaluation tools, leveraging the different artifacts and activities of a standardized development process model (the Unified Software Development Process), and providing templates of these artifacts tailored to accessibility evaluation tools. The work presented especially considers the work in progress in this area by the W3C/WAI Evaluation and Report Working Group (ERT WG).
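
As a toy illustration of the kind of functionality such tools automate (and in no way the paper's templates or the ERT WG tooling), the following standard-library sketch flags images that lack alternative text:

```python
# Toy accessibility check: flag <img> elements without an alt attribute.
# Illustrative only; real evaluation tools cover many more requirements.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.issues.append(f"<img> without alt attribute: {attrs}")

checker = ImgAltChecker()
checker.feed('<p><img src="logo.png"><img src="chart.png" alt="Sales chart"></p>')
for issue in checker.issues:
    print(issue)
```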

Relevance:

30.00%

Publisher:

Abstract:

Software Product Line Engineering has significant advantages in family-based software development. The common and variable structure for all products of a family is defined through a Product-Line Architecture (PLA), which consists of a common set of reusable components and connectors that can be configured to build the different products. The design of a PLA requires solutions for capturing such configuration (variability). The Flexible-PLA Model is a solution that supports the specification of external variability of the PLA configuration as well as internal variability of components. However, complete support for product-line development requires translating architecture specifications into code. This complex task needs automation to avoid human error. Since Model-Driven Development allows automatic code generation from models, this paper presents a solution to automatically generate AspectJ code from Flexible-PLA models previously configured to derive specific products. This solution is supported by a modeling framework and validated in a software factory.
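
As a deliberately simplified illustration of model-to-text generation (the model structure and template below are hypothetical, not the Flexible-PLA metamodel), a configured variation point can be turned into AspectJ-like source as follows:

```python
# Minimal model-to-text sketch: emit AspectJ-like aspects from a toy configured
# variability model. Hypothetical structure, not Flexible-PLA itself.

configured_model = {
    "component": "PaymentService",
    "variation_points": [
        {"name": "Logging", "selected_variant": "FileLogging"},
        {"name": "Encryption", "selected_variant": "AES"},
    ],
}

ASPECT_TEMPLATE = """public aspect {vp}Aspect {{
    // weaves the '{variant}' variant into {component}
    before(): execution(* {component}.*(..)) {{
        {variant}.apply(thisJoinPoint);
    }}
}}
"""

for vp in configured_model["variation_points"]:
    print(ASPECT_TEMPLATE.format(vp=vp["name"],
                                 variant=vp["selected_variant"],
                                 component=configured_model["component"]))
```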

Relevance:

30.00%

Publisher:

Abstract:

In the last few years, technical debt has been used as a useful means of making the intrinsic cost of internal software quality weaknesses visible, by quantifying this cost. Specifically, technical debt is expressed in terms of two main concepts: principal and interest. The principal is the cost of eliminating or reducing the impact of a so-called technical debt item in a software system, whereas the interest is the recurring cost, over a time period, of not eliminating it. Previous work on technical debt has mainly focused on estimating principal and interest and on performing a cost-benefit analysis, which allows one to determine whether removing technical debt is profitable and to prioritize which items incurring technical debt should be fixed first. In these previous works, however, technical debt is assumed to be flat over time, whereas introducing new factors into the estimation may produce non-flat models that allow more accurate predictions. These factors should be used to estimate principal and interest and to perform the associated cost-benefit analysis. In this paper, we take a step forward by introducing uncertainty about the interest, together with a time frame factor, so that a number of possible future scenarios can be depicted. Estimates obtained without considering the possible evolution of the interest over time may be less accurate, as they consider simplistic scenarios without changes.
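
As a simple illustration of how an uncertain interest and a time frame change the cost-benefit picture (all figures below are invented, not taken from the paper), a Monte Carlo sketch might look like this:

```python
# Sketch: cost-benefit of removing one technical-debt item when the monthly
# interest is uncertain. All figures are invented for illustration.
import random

random.seed(1)

principal = 40.0     # person-hours to remove the debt item now
time_frame = 12      # months in the analysis horizon

def keep_debt_costs(n_runs=10_000):
    """Accumulated interest over the time frame under uncertain monthly interest."""
    costs = []
    for _ in range(n_runs):
        monthly_interest = random.triangular(1.0, 8.0, 3.0)  # hours/month
        costs.append(monthly_interest * time_frame)
    return sorted(costs)

costs = keep_debt_costs()
median = costs[len(costs) // 2]
p_profitable = sum(c > principal for c in costs) / len(costs)
print(f"median interest over {time_frame} months: {median:.1f} h (principal {principal:.0f} h)")
print(f"probability that removal pays off: {p_profitable:.2f}")
```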

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, Software Product Line (SPL) engineering [1] has been widely adopted in software development due to the significant improvements it provides, such as reduced cost and time-to-market and the flexibility to respond to planned changes [2]. SPL takes advantage of common features among the products of a family through the systematic reuse of core assets and the effective management of variability across the products. SPL features are realized at the architectural level in product-line architecture (PLA) models; therefore, suitable modeling and specification techniques are required to model variability. In fact, architectural variability modeling has become a challenge for SPL engineering, because PLA modeling requires modeling variability not only at the level of the external architecture configuration (see the literature reviews [3,4]), but also at the level of the internal specification of components [5]. In addition, PLA modeling requires preserving the traceability between features and PLAs. Finally, PLA modeling should guide architects in modeling the PLA core assets and variability, and in deriving the customized products. To address these needs, this demonstration presents the FPLA Modeling Framework.
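
As a toy illustration of configuring variability and validating a derived product (feature names and constraints are hypothetical; FPLA itself works on architectural models with feature traceability), consider the following sketch:

```python
# Toy sketch of product derivation from a variability model.
# Feature names and constraints are hypothetical.

features = {"core", "gui", "cli", "encryption", "cloud_sync"}
mandatory = {"core"}
alternatives = [{"gui", "cli"}]              # exactly one must be chosen
requires = {"cloud_sync": {"encryption"}}    # cloud_sync requires encryption

def valid_product(selection):
    """Check a feature selection against the variability constraints."""
    if not mandatory <= selection:
        return False
    for group in alternatives:
        if len(selection & group) != 1:
            return False
    for feat, deps in requires.items():
        if feat in selection and not deps <= selection:
            return False
    return selection <= features

print(valid_product({"core", "gui", "cloud_sync", "encryption"}))  # True
print(valid_product({"core", "gui", "cloud_sync"}))                # False: missing encryption
```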

Relevance:

30.00%

Publisher:

Abstract:

An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality level. However, there are more techniques than a single developer could master, and it is still impossible to be certain that software is free of defects. It is therefore crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality results for a given product. Some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, but little is known yet about the relationship between different techniques and their behaviour in different contexts. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code review by abstraction) in terms of their fault detection capability. The results will be used to strengthen the practical knowledge available on these techniques.
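
As a minimal illustration of the two testing techniques under study, applied to an invented function rather than the experiment's actual objects, consider:

```python
# Tiny illustration of equivalence class partitioning and decision coverage,
# applied to an invented function (not the study's experimental objects).

def shipping_fee(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2:          # decision 1
        return 5.0
    if weight_kg <= 20:         # decision 2
        return 12.0
    return 30.0

# Equivalence class partitioning: one representative per input class.
equivalence_classes = {
    "invalid (<= 0)": -1,
    "light (0, 2]": 1,
    "medium (2, 20]": 10,
    "heavy (> 20)": 25,
}

# Decision coverage: every decision must evaluate to both True and False.
# The four representatives above already exercise both outcomes of each `if`.
for label, weight in equivalence_classes.items():
    try:
        print(f"{label:16} -> fee {shipping_fee(weight)}")
    except ValueError as exc:
        print(f"{label:16} -> {exc}")
```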

Relevance:

30.00%

Publisher:

Abstract:

The Software Engineering (SE) community has historically focused on working with models that represent functionality and persistence, pushing interaction modelling into the background, where it has been covered by the Human-Computer Interaction (HCI) community. Recently, adequately modelling interaction, and specifically usability, has come to be considered a key factor for success in user acceptance, making the integration of the SE and HCI communities more necessary. If we focus on the Model-Driven Development (MDD) paradigm, we notice that there is a lack of proposals that deal with usability features from the very first steps of the software development process. In general, usability features are implemented manually once the code has been generated from models. This contradicts the MDD paradigm, which claims that all the analysts' effort must be focused on building models, with code generation relegated to model-to-code transformations. Moreover, usability features related to functionality may involve important changes in the system architecture if they are not considered from the early steps. We argue that these usability features related to functionality can be represented abstractly in a conceptual model and that their implementation can be carried out automatically.
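
As a minimal sketch of the idea (the "cancellable" and "progress feedback" features and the template below are assumptions for illustration, not the paper's proposal), a functionality-related usability feature can be captured in a conceptual model and expanded by a model-to-text transformation:

```python
# Minimal sketch: usability features captured in a conceptual model and turned
# into code stubs by a model-to-text transformation. Feature names and template
# are assumptions for illustration only.

conceptual_model = {
    "service": "ReportGenerator",
    "operations": [
        {"name": "generate_yearly_report",
         "usability": {"cancellable": True, "progress_feedback": True}},
        {"name": "list_templates", "usability": {}},
    ],
}

def generate_stub(op, service):
    lines = [f"def {op['name']}(self):  # generated for {service}"]
    if op["usability"].get("progress_feedback"):
        lines.append("    self.notify_progress(0)    # generated: progress feedback")
    if op["usability"].get("cancellable"):
        lines.append("    if self.cancel_requested:  # generated: cancellation support")
        lines.append("        return None")
    lines.append("    ...  # functional body")
    return "\n".join(lines)

for op in conceptual_model["operations"]:
    print(generate_stub(op, conceptual_model["service"]))
    print()
```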

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a fuzzy based Variable Structure Control (VSC) with guaranteed stability is presented. The main objective is to obtain improved performance for highly non-linear unstable systems. The main contributions of this work are, firstly, new functions for chattering reduction (chattering being considered the main drawback of VSC) and for error convergence without sacrificing invariant properties; and secondly, a guarantee of global stability of the controlled system. The well-known weighting parameters approach is used to optimize the local and global approximation and modeling capability of the T-S fuzzy model. A one-link robot is chosen as a nonlinear unstable system to evaluate the robustness, effectiveness and remarkable performance of the optimization approach and the high accuracy obtained in approximating nonlinear systems in comparison with the original T-S model. Simulation results indicate the potential and generality of the algorithm. The application of the proposed FLC-VSC shows that both alleviation of chattering and robust performance are achieved. The effectiveness of the proposed controller is also proven in the presence of disturbances and noise.
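
The paper proposes its own new switching functions; as a generic illustration of the chattering-reduction idea only (replacing the discontinuous sign term with a smooth approximation, under an assumed first-order sliding dynamic and invented gains), consider:

```python
# General chattering-reduction idea in sliding-mode / VSC control: replace the
# discontinuous sign(s) term with a smooth approximation. The plant, gains and
# smoothing function here are assumptions, not the paper's proposed functions.
import numpy as np

def control(s, k=5.0, smooth=True, eps=0.1):
    """Switching control term for sliding variable s."""
    if smooth:
        return -k * np.tanh(s / eps)   # boundary-layer style, less chattering
    return -k * np.sign(s)             # classical discontinuous switching

# Compare the two laws on a simple sliding dynamic ds/dt = u + d.
dt, d = 0.001, 1.0                     # time step and constant disturbance
for smooth in (False, True):
    s, switches, prev_u = 1.0, 0, None
    for _ in range(5000):
        u = control(s, smooth=smooth)
        if prev_u is not None and np.sign(u) != np.sign(prev_u):
            switches += 1              # each sign flip of u is a chattering event
        prev_u = u
        s += (u + d) * dt
    print(f"smooth={smooth}: |s| final = {abs(s):.3f}, control sign changes = {switches}")
```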

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a fuzzy logic controller (FLC) based variable structure control (VSC) is presented. The main objective is to obtain improved performance for highly non-linear unstable systems. New functions for chattering reduction and error convergence without sacrificing invariant properties are proposed. The main feature of the proposed method is that the switching function is added as an additional fuzzy variable and is introduced in the premise part of the fuzzy rules, together with the state variables. In this work, a tuning of the well-known weighting parameters approach is proposed to optimize the local and global approximation and modelling capability of the Takagi-Sugeno (T-S) fuzzy model, in order to improve the choice of the performance index and minimize it. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs, which restricts the application of the T-S method because this type of membership function has been widely used in control applications. The approach developed here can therefore be considered a generalized version of the T-S method. An inverted pendulum mounted on a cart is chosen to evaluate the robustness, effectiveness, accuracy and remarkable performance of the proposed estimation approach in comparison with the original T-S model. Simulation results indicate the potential, simplicity and generality of the estimation method and the robustness of the chattering reduction algorithm. We also prove that the proposed estimation algorithm converges very fast, making it very practical to use. The application of the proposed FLC-VSC shows that both alleviation of chattering and robust performance are achieved.
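
As a minimal illustration of the Takagi-Sugeno blending that the identification approach tunes (the membership functions and local linear models below are invented, not the paper's), consider:

```python
# Minimal Takagi-Sugeno blend: the global output is the membership-weighted sum
# of local linear models. Memberships and local models are invented, only to
# illustrate the weighting idea discussed above.
import numpy as np

def membership_low(x):    # simple linear memberships over x in [-1, 1]
    return np.clip((1.0 - x) / 2.0, 0.0, 1.0)

def membership_high(x):
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

# Two local linear rules: y_i = a_i * x + b_i
local_models = [(0.5, 0.0), (2.0, -0.5)]

def ts_output(x):
    w = np.array([membership_low(x), membership_high(x)])
    y_local = np.array([a * x + b for a, b in local_models])
    return float((w @ y_local) / w.sum())     # normalised weighted blend

for x in (-1.0, 0.0, 0.5, 1.0):
    print(f"x = {x:+.1f} -> y = {ts_output(x):+.3f}")
```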

Relevance:

30.00%

Publisher:

Abstract:

Following the Integrated Water Resources Management approach, the European Water Framework Directive requires Member States to develop water management plans at the catchment level. Those plans have to integrate the different interests and must be developed with stakeholder participation. To meet these requirements, managers need tools to assess the impacts of possible management alternatives on natural and socio-economic systems. These tools should ideally be able to address the complexity and uncertainties of the water system, while serving as a platform for stakeholder participation. The objective of our research was to develop a participatory integrated assessment model, based on the combination of a crop model, an economic model and a participatory Bayesian network, with an application in the middle Guadiana sub-basin, in Spain. The methodology is intended to capture the complexity of water management problems, incorporating the relevant sectors as well as the relevant scales involved in water management decision making. The integrated model has allowed us to test different management, market and climate change scenarios and to assess the impacts of such scenarios on the natural system (crops), the socio-economic system (farms) and the environment (water resources). Finally, this integrated assessment modelling process has allowed stakeholder participation, complying with the main requirements of current European water laws.
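
As a tiny, hand-rolled illustration of the Bayesian-network idea (the structure and probabilities below are invented; the actual model couples a crop model, an economic model and a participatory network elicited with stakeholders), consider:

```python
# Tiny illustration of the Bayesian-network idea: propagate a climate scenario
# through conditional probabilities to water availability and farm margin.
# Structure and numbers are invented; the real integrated model is far richer.

p_climate = {"dry": 0.4, "normal": 0.6}                       # scenario prior

p_water_given_climate = {                                     # P(water | climate)
    "dry":    {"low": 0.7, "adequate": 0.3},
    "normal": {"low": 0.2, "adequate": 0.8},
}

p_margin_given_water = {                                      # P(margin | water)
    "low":      {"negative": 0.5, "positive": 0.5},
    "adequate": {"negative": 0.1, "positive": 0.9},
}

def p_margin(margin_state):
    """Marginal probability of a farm-margin state over all scenarios."""
    total = 0.0
    for climate, pc in p_climate.items():
        for water, pw in p_water_given_climate[climate].items():
            total += pc * pw * p_margin_given_water[water][margin_state]
    return total

print(f"P(positive farm margin) = {p_margin('positive'):.3f}")
print(f"P(negative farm margin) = {p_margin('negative'):.3f}")
```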