963 results for software quality attribute
Abstract:
Context: This PhD thesis is framed within the requirements elicitation activity. Requirements elicitation is generally accepted as one of the most important activities in the Requirements Engineering process and has a direct impact on software quality. It is an activity in which communication among stakeholders (analysts, customers, users) is paramount. The effectiveness and efficiency of the analyst in understanding the needs of customers and users is a critical factor for the success of software development. The literature has focused mainly on studying and understanding a specific set of personal capabilities or skills that the analyst must possess to perform the elicitation activity effectively. However, very few works have studied those capabilities or skills empirically. Objective: This research aims to study the effect of the analysts' experience, problem domain knowledge and academic qualifications on the effectiveness of the requirements elicitation process during the analyst's first contacts with the customer. Research method: We conducted 8 empirical studies, divided into quasi-experiments (4) and controlled experiments (4). A total of 110 experimental subjects participated in the studies, including postgraduate students of the Escuela Técnica Superior de Ingenieros Informáticos of the Universidad Politécnica de Madrid and professionals. The experimental task consisted of requirements elicitation sessions on one or more problem domains (known or unknown to the subjects). The elicitation sessions were conducted using open (unstructured) interviews. After the interview, the subjects reported in writing all the information they had acquired. Results: In unknown domains, the analyst's experience (interviews, requirements, development and professional) does not influence effectiveness. In known domains, interviewing experience (r = 0.34, p-value = 0.080) and requirements experience (r = 0.22, p-value = 0.279) have a positive effect; that is, analysts with more years of interviewing and/or requirements experience tend to achieve higher effectiveness. By contrast, development experience (r = -0.06, p-value = 0.765) and professional experience (r = -0.35, p-value = 0.077) tend to have a null and a negative effect, respectively. The analysts' knowledge of the problem domain has a moderate, statistically significant positive effect (r = 0.31, p-value = 0.029) on the effectiveness of the elicitation activity; that is, analysts with domain knowledge tend to be more effective in known problem domains. Regarding academic qualifications, the lack of diversity in the subjects' academic degrees makes it impossible to reach a conclusion. We were able to explore the effect of academic qualifications in only two quasi-experiments, and our results show contradictory effects (r = 0.694, p-value = 0.51 and r = -0.266, p-value = 0.383). In addition to the variables indicated above, we confirmed the existence of moderator variables that affect the elicitation activity, such as the interviewee or training. Our experimental data confirm that the interviewee is a key factor in the elicitation activity. The interviewee has a statistically significant influence on the analysts' effectiveness (p-value = 0.000). The difference between interviewing one interviewee or another, in natural units, ranges between 18% - 23% in effectiveness. On the other hand, requirements training considerably increases the analysts' effectiveness. Subjects who performed requirements elicitation after receiving specific requirements training tend to be between 12% and 20% more effective than those who did not receive it. The effect is significant (p-value = 0.000). Finally, we observed three facts that could influence the results of this research. First, the analysts' effectiveness differs depending on the type of domain element. In known domains, experienced analysts tend to acquire more concepts than novice analysts. In unknown domains, it is the processes that are acquired most prominently. Second, analysts reach a kind of "glass ceiling" that prevents them from acquiring more information; that is, the analyst only recognises (part of) the problem domain elements that are mentioned. This is observed in both the unknown and the known problem domain, and seems to be related to the way analysts explore the problem domain. Third, although years of experience do not seem to predict how effective an analyst will be, they do seem to ensure that an analyst with some experience will, in general, have a minimum effectiveness that is higher than the minimum effectiveness of analysts with less experience. Conclusions: The results show that, in unknown domains, experience by itself does not determine the effectiveness of requirements analysts. In known domains, the analysts' effectiveness is influenced by their interviewing and requirements experience, although only partially. Other variables influence the analysts' effectiveness, such as soft skills. The analyst's knowledge of the problem domain has a positive effect on effectiveness and interacts positively with experience, increasing effectiveness even further. Although it was not possible to reach solid conclusions about the effect of academic qualifications, it does seem clear that specific requirements training has an important positive influence on the analysts' effectiveness. Finally, the analyst is not the only relevant factor in the elicitation activity. Customers/users (interviewees) also play an important role in the information generation process.

ABSTRACT Context: This PhD dissertation addresses the requirements elicitation activity. Requirements elicitation is generally acknowledged as one of the most important activities of the requirements process, having a direct impact on software quality. It is an activity where communication among stakeholders (analysts, customers, users) is paramount. The analyst's ability to effectively understand customers'/users' needs represents a critical factor for the success of software development. The literature has focused on studying and comprehending a specific set of personal skills that the analyst must have to perform requirements elicitation effectively. However, few studies have explored those skills from an empirical viewpoint.
Goal: This research aims to study the effects of experience, domain knowledge and academic qualifications on analysts' effectiveness when performing requirements elicitation during the first stages of analyst-customer interaction. Research method: We have conducted eight empirical studies: quasi-experiments (four) and controlled experiments (four). A total of 110 experimental subjects participated, including graduate students from the Escuela Técnica Superior de Ingenieros Informáticos of the Universidad Politécnica de Madrid, as well as researchers and professionals. The experimental tasks consisted of elicitation sessions about one or several problem domains (ignorant and/or aware domains for the subjects). Elicitation sessions were conducted using unstructured interviews. After each interview, the subjects reported in writing all the collected information. Results: In ignorant domains, the analyst's experience (interviews, requirements, development and professional) does not influence her effectiveness. In aware domains, interviewing experience (r = 0.34, p-value = 0.080) and requirements experience (r = 0.22, p-value = 0.279) have a positive effect, i.e. analysts with more years of interviewing/requirements experience tend to achieve higher effectiveness. On the other hand, development experience (r = -0.06, p-value = 0.765) and professional experience (r = -0.35, p-value = 0.077) tend to have a null and a negative effect, respectively. As regards the analyst's problem domain knowledge, it has a modest positive effect (r = 0.31), statistically significant (p-value = 0.029), on the effectiveness of the elicitation activity, i.e. analysts tend to be more effective in problem domains they are aware of. As regards academic qualifications, due to the lack of diversity in the subjects' academic degrees, we cannot come to a conclusion. We were able to explore the effect of academic qualifications in only two quasi-experiments; however, our results show contradictory effects (r = 0.694, p-value = 0.51 and r = -0.266, p-value = 0.383). Besides the variables mentioned above, we have confirmed the existence of moderator variables influencing the elicitation activity, such as the interviewee and training. Our data confirm that the interviewee is a key factor in the elicitation activity; it has a statistically significant effect on analysts' effectiveness (p-value = 0.000). Interviewing one interviewee or another represents a difference in effectiveness of 18% - 23%, in natural units. On the other hand, requirements training considerably increases the analysts' effectiveness. Subjects who performed requirements elicitation after specific training tend to be 12% - 20% more effective than those who did not receive training. The effect is statistically significant (p-value = 0.000). Finally, we have observed three phenomena that could have an influence on the results of this research. First, the analysts' effectiveness differs depending on the type of domain element. In aware domains, experienced analysts tend to capture more concepts than novices. In ignorant domains, processes are identified more frequently. Second, analysts reach a "glass ceiling" that prevents them from acquiring more information, i.e. analysts only identify (part of) the elements of the problem domain. This can be observed in both ignorant and aware domains, and seems to be related to the way analysts explore the problem domain. Third, years of experience do not seem to be a good predictor of how effective an analyst will be; however, they seem to guarantee that an analyst with some years of experience will have a higher minimum effectiveness than the minimum effectiveness of analysts with fewer years of experience. Conclusions: Our results indicate that experience alone does not explain analysts' effectiveness in ignorant domains. In aware domains, analysts' effectiveness is influenced by their interviewing and requirements experience, albeit only partially. Other variables influence analysts' effectiveness, e.g. soft skills. The analyst's problem domain knowledge has a positive effect on effectiveness; it interacts positively with experience, increasing analysts' effectiveness even further. Although we could not draw solid conclusions on the effect of academic qualifications, it seems clear that specific requirements training has an important positive effect on analysts' effectiveness. Finally, the analyst is not the only relevant factor in the elicitation activity. Customers/users (interviewees) also play an important role in the information generation process.
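The effect sizes above are reported as Pearson correlations with p-values. As a minimal sketch of how such a correlation between years of experience and elicitation effectiveness could be computed, the following Python fragment uses scipy.stats.pearsonr on invented example data (the numbers are illustrative, not the study's data):

```python
from scipy.stats import pearsonr

# Hypothetical data: years of interviewing experience and elicitation
# effectiveness (share of problem-domain elements correctly captured, 0-1).
years_interviewing = [0, 1, 2, 3, 5, 8, 10, 12]
effectiveness      = [0.31, 0.35, 0.30, 0.42, 0.44, 0.40, 0.52, 0.49]

# Pearson correlation coefficient and two-sided p-value.
r, p_value = pearsonr(years_interviewing, effectiveness)
print(f"r = {r:.2f}, p-value = {p_value:.3f}")
```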
Abstract:
There is growing interest in the use of context-awareness as a technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of users. However, context-awareness introduces a variety of software engineering challenges. In this paper, we address these challenges by proposing a set of conceptual models designed to support the software engineering process, including context modelling techniques, a preference model for representing context-dependent requirements, and two programming models. We also present a software infrastructure and software engineering process that can be used in conjunction with our models. Finally, we discuss a case study that demonstrates the strengths of our models and software engineering approach with respect to a set of software quality metrics.
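As a rough illustration of the kind of preference model the abstract mentions (context-dependent requirements that select application behaviour from the current context), here is a hypothetical Python sketch; the context attributes, scoring scheme and class names are assumptions for illustration and do not come from the paper:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A context is a set of observed attributes, e.g. sensed location and activity.
Context = Dict[str, object]

@dataclass
class Preference:
    """A context-dependent preference: when the condition holds in the current
    context, the given choice is preferred with the given score."""
    condition: Callable[[Context], bool]
    choice: str
    score: float

def resolve(preferences: List[Preference], context: Context, default: str) -> str:
    """Pick the highest-scoring choice whose condition matches the context."""
    matching = [p for p in preferences if p.condition(context)]
    return max(matching, key=lambda p: p.score).choice if matching else default

# Hypothetical preferences for a phone's notification behaviour.
prefs = [
    Preference(lambda c: c.get("activity") == "presenting", "silent", 1.0),
    Preference(lambda c: c.get("location") == "home", "ring", 0.5),
]

print(resolve(prefs, {"location": "meeting_room", "activity": "presenting"}, "vibrate"))
```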
Abstract:
As part of the activities of the first Symposium on Process Improvement Models and Software Quality of the Spanish Public Administration, working groups were formed to discuss the current state of the Requirements Management and Supplier Agreement Management processes. This article presents the general results and main contributions of those working groups. The results have enabled a preliminary appraisal of the current state of these two processes in the Spanish Public Administration.
Abstract:
Long-term foetal surveillance is often recommended. Hence, fully non-invasive acoustic recording through the maternal abdomen represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded heart sound signal is heavily loaded with noise, so the determination of the foetal heart rate raises serious signal processing issues. In this paper, we present a new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings. Filtering is employed as the first step of the algorithm to reduce the background noise. A block for enhancing first heart sounds is then used to further attenuate other components of the foetal heart sound signal. A complex logic block, guided by a number of rules concerning foetal heart beat regularity, is then applied to detect the most probable first heart sounds among several candidates. A final block is used for exact first heart sound timing and, in turn, foetal heart rate estimation. The filtering and enhancing blocks are implemented by means of different techniques, so that different processing paths are proposed. Furthermore, a reliability index is introduced to quantify the consistency of the estimated foetal heart rate and, based on statistical parameters, a software quality index is designed to indicate the most reliable analysis procedure (that is, the combination of processing path and first heart sound time mark that provides the lowest estimation errors). The algorithm's performance was tested on phonocardiographic signals recorded in a local private gynaecology practice from a sample group of about 50 pregnant women. The phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by our algorithm and the one provided by the cardiotocographic device). Our results show that the proposed algorithm, in particular some analysis procedures, provides reliable foetal heart rate signals, very close to the reference cardiotocographic recordings. © 2010 Elsevier Ltd. All rights reserved.
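A minimal sketch of the kind of processing pipeline described above, assuming a single-channel phonocardiogram sampled at 1 kHz; the filter band, envelope method, regularity constraint and reliability index are illustrative assumptions, not the authors' exact design:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def estimate_fhr(pcg, fs=1000.0):
    """Rough foetal heart rate (FHR) estimate from a phonocardiogram (PCG).

    Illustrative pipeline: band-pass filtering -> envelope extraction ->
    peak picking constrained by beat regularity -> FHR from S1 intervals.
    """
    # 1) Band-pass filter to attenuate background noise (band chosen for illustration).
    b, a = butter(4, [20 / (fs / 2), 120 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)

    # 2) Enhance first heart sounds (S1) via the analytic-signal envelope.
    envelope = np.abs(hilbert(filtered))

    # 3) Candidate S1 detection with a physiological regularity constraint:
    #    foetal HR is roughly 110-160 bpm, so successive S1s are ~0.37-0.55 s apart.
    min_distance = int(0.35 * fs)
    peaks, _ = find_peaks(envelope, distance=min_distance,
                          prominence=0.5 * np.std(envelope))

    # 4) FHR series from inter-beat intervals; a simple reliability index
    #    penalises irregular interval sequences.
    intervals = np.diff(peaks) / fs            # seconds between detected S1 sounds
    fhr = 60.0 / intervals                     # beats per minute
    reliability = 1.0 / (1.0 + np.std(intervals) / np.mean(intervals))
    return fhr, reliability
```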
Abstract:
Software bug analysis is one of the most important activities in software quality. The rapid and correct implementation of the necessary fix affects both developers, who must deliver fully functioning software, and users, who need to perform their daily tasks. In this context, incorrect bug classification can lead to unwanted situations. One of the main attributes assigned to a bug when it is initially reported is severity, which reflects the urgency of fixing that problem. Using datasets extracted from five open source systems (Apache, Eclipse, Kernel, Mozilla and Open Office), we identified an irregular distribution of bugs across the existing severities, which is an early sign of misclassification. In the analyzed datasets, about 85% of bugs are classified with normal severity. This classification rate can negatively affect the software development context: a misclassified bug may be assigned to a developer with too little experience to solve it, so the fix may take longer or even result in an incorrect implementation. Several studies in the literature have disregarded normal bugs, working only with the portion of bugs initially considered severe or non-severe. This work investigates this portion of the data in order to identify whether normal severity reflects the real impact and urgency, whether there are bugs initially classified as normal that could be classified with another severity, and whether there are impacts for developers in this context. To this end, an automatic classifier based on three algorithms (Naïve Bayes, MaxEnt and Winnow) was developed to assess whether normal severity is correct for the bugs initially categorized with this severity. The algorithms achieved an accuracy of about 80% and showed that, depending on the algorithm, between 21% and 36% of the bugs should have been classified differently, which corresponds to somewhere between 70,000 and 130,000 bugs in the dataset.
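A minimal sketch of this kind of severity (re)classification, using a bag-of-words Naïve Bayes model from scikit-learn as a stand-in for the three algorithms named above; the training examples and labels are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: bug report summaries with trusted severity labels.
reports = [
    "crash on startup when profile is missing",
    "typo in preferences dialog label",
    "data loss after saving large document",
    "button slightly misaligned in toolbar",
]
severities = ["severe", "non-severe", "severe", "non-severe"]

# Bag-of-words features + Naive Bayes, one of the algorithm families mentioned above.
clf = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
clf.fit(reports, severities)

# Re-assess bugs originally filed as "normal": does the text suggest another class?
normal_bugs = ["application freezes and must be killed when opening attachments"]
print(clf.predict(normal_bugs))  # e.g. ['severe'] -> candidate for reclassification
```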
Abstract:
Argon infiltration is a well-known problem of hot isostatically pressed components. Thus, the argon content is one quality attribute that is measured after a hot isostatic pressing (HIP) process. Since the Selective Laser Melting (SLM) process takes place under an inert argon atmosphere, it is conceivable that argon is entrapped in the component after SLM processing. Despite the use of optimized process parameters, defects such as pores and shrink holes cannot be completely avoided. In particular, pores can be filled with process gas during the build process. Argon-filled pores would clearly affect the mechanical properties. The present paper takes a closer look at the porosity of Inconel 718 samples generated by means of SLM. Furthermore, the argon content of the powder feedstock, of samples made by means of SLM, of samples that were hot isostatically pressed after the SLM process, and of conventionally manufactured samples was measured and compared. The results showed an increased argon content in the Inconel 718 samples after SLM processing compared with conventionally manufactured samples.
Abstract:
Single-page applications have historically been subject to strong market forces driving fast development and deployment at the expense of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, with AngularJS being a general framework providing rich base functionality and React a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability in the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size. Summary: Single-page applications have historically been affected by strong market forces that drive fast development cycles and deliveries. As a result, quality control and changeable code, which are important factors for maintainability, suffer. In this report we develop two functionally equivalent single-page applications with AngularJS and React and compare their maintainability according to ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, where AngularJS is a framework with a lot of ready-made functionality and React a smaller library specialized in view rendering. The quality comparison was carried out by calculating a maintainability index for each application. Version control analysis was used to determine other quality indicators after the initial development and two subsequent maintenance efforts. The results show no marked differences in maintainability for the initial applications. As more functionality was added, the maintainability index dropped faster for the AngularJS application, which corresponds to a sharper increase in complexity compared to the React application. Version control analysis shows that changes in the data flow require larger modifications of the AngularJS application because of its predetermined architecture. From this we conclude that frameworks are useful when they support development towards known requirements, but that their benefit becomes limited the more an application grows in size.
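For reference, a minimal sketch of the classic Maintainability Index formulation; the report does not state which exact variant was used, so the coefficients below and the example inputs should be treated as assumptions:

```python
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          loc: int) -> float:
    """Classic (unscaled) Maintainability Index.

    MI = 171 - 5.2*ln(V) - 0.23*G - 16.2*ln(LOC), where V is the Halstead
    Volume, G the cyclomatic complexity and LOC the lines of code.
    Higher values indicate more maintainable code.
    """
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Hypothetical module-level measurements for two applications.
print(maintainability_index(halstead_volume=2500, cyclomatic_complexity=18, loc=400))
print(maintainability_index(halstead_volume=1800, cyclomatic_complexity=12, loc=320))
```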
Abstract:
The life cycle of software applications is generally very short, and their requirements are extremely volatile. Under these conditions, programmers need development tools and techniques that provide an extreme level of productivity. We consider code reuse the most prominent approach to solving that problem. Our proposal uses the advantages provided by Aspect-Oriented Programming to build a reusable framework capable of making both the programmer and the application oblivious as far as data persistence is concerned, thus avoiding the need to write any line of code for that concern. Besides the benefits to productivity, software quality also increases. This paper describes the current state of the art, identifying the main challenge in building a complete and reusable framework for Orthogonal Persistence in concurrent environments with support for transactions. The present work also includes a successfully developed prototype of that framework, capable of freeing the programmer from implementing any data read or write operations. This prototype is supported by an object-oriented database and, in the future, will also use a relational database and support transactions.
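The framework described above is built with Aspect-Oriented Programming; as a rough, simplified analogue of the underlying idea (keeping persistence code out of the domain classes), here is a hypothetical Python sketch in which a class decorator intercepts attribute writes and forwards them to a persistence layer. The decorator and the in-memory store are invented for illustration and are not the authors' framework:

```python
# Illustration only: a decorator that transparently persists attribute writes,
# keeping the domain class itself free of any persistence code.
_store = {}  # stand-in for the underlying (object-oriented) database

def persistent(cls):
    original_setattr = cls.__setattr__

    def tracked_setattr(self, name, value):
        original_setattr(self, name, value)
        # "Advice" woven around every state change: write-through to the store.
        _store.setdefault((cls.__name__, id(self)), {})[name] = value

    cls.__setattr__ = tracked_setattr
    return cls

@persistent
class Customer:
    def __init__(self, name, email):
        self.name = name      # persisted transparently
        self.email = email    # persisted transparently

c = Customer("Alice", "alice@example.com")
c.email = "alice@example.org"
print(_store)  # the domain code never referenced the store explicitly
```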
Abstract:
Many municipal activities require updated large-scale maps that include both topographic and thematic information. For this purpose, the efficient use of very high spatial resolution (VHR) satellite imagery suggests the development of approaches that enable a timely discrimination, counting and delineation of urban elements according to legal technical specifications and quality standards. Therefore, the nature of this data source and the expanding range of applications call for objective methods and quantitative metrics to assess the quality of the extracted information that go beyond traditional thematic accuracy alone. The present work concerns the development and testing of a new approach for using technical mapping standards in the quality assessment of buildings automatically extracted from VHR satellite imagery. Feature extraction software was employed to map buildings present in a pansharpened QuickBird image of Lisbon. Quality assessment was exhaustive and involved comparisons of extracted features against a reference data set, introducing cartographic constraints from scales 1:1000, 1:5000, and 1:10,000. The spatial data quality elements subject to evaluation were: thematic (attribute) accuracy, completeness, and geometric quality assessed based on planimetric deviation from the reference map. Tests were developed and metrics analyzed considering thresholds and standards for the large mapping scales most frequently used by municipalities. Results show that values for completeness varied with mapping scale and were only slightly higher for scale 1:10,000. Concerning geometric quality, a large percentage of extracted features met the strict topographic standards of planimetric deviation for scale 1:10,000, while no buildings were compliant with the specification for scale 1:1000.
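A minimal sketch of two of the quality measures mentioned above, completeness against a reference data set and planimetric deviation checked against a scale-dependent tolerance; the matching rule and tolerance values are simplified assumptions, not the study's exact specifications:

```python
import math

def completeness(extracted, reference, max_dist=2.0):
    """Fraction of reference buildings matched by an extracted building.

    Buildings are represented by their centroid (x, y) in metres; a reference
    building counts as detected if an extracted centroid lies within max_dist.
    """
    detected = 0
    for rx, ry in reference:
        if any(math.hypot(rx - ex, ry - ey) <= max_dist for ex, ey in extracted):
            detected += 1
    return detected / len(reference)

def planimetric_compliance(deviations, scale_denominator, tolerance_mm=0.5):
    """Share of measured planimetric deviations within a map-scale tolerance.

    A tolerance of e.g. 0.5 mm at map scale corresponds to
    0.5 mm * scale_denominator on the ground (0.5 m at 1:1000, 5 m at 1:10,000).
    """
    ground_tolerance = tolerance_mm / 1000.0 * scale_denominator
    return sum(d <= ground_tolerance for d in deviations) / len(deviations)

# Hypothetical example: deviations in metres checked against the 1:10,000 tolerance.
print(planimetric_compliance([0.8, 2.1, 4.9, 6.3], scale_denominator=10000))
```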
Abstract:
An increasing number of m-Health applications are being developed, benefiting health service delivery. In this paper, a new methodology based on the principle of calm computing, applied to diagnostic and therapeutic procedure reporting, is proposed. A mobile application was designed for the physicians of one of the major Portuguese hospitals, which takes advantage of a multi-agent interoperability platform, the Agency for the Integration, Diffusion and Archive (AIDA). This application allows the visualization of inpatient and outpatient medical reports in a quicker and safer manner, in addition to offering remote access to information. This project shows the advantages of using mobile software in a medical environment, but the first step is always to build or use an interoperability platform that is flexible, adaptable and pervasive. The platform offers a comprehensive set of services that restricts the development of mobile software almost exclusively to the design of the mobile user interface. The technology was tested and assessed in a real context by intensivists.
Abstract:
Magdeburg, Univ., Fak. für Informatik, Diss., 2010
Abstract:
Quality management has become a strategic issue for organisations and is very valuable for producing quality software. However, quality management systems (QMS) are not easy to implement and maintain. The authors' experience shows the benefits of developing a QMS by first formalising it using semantic web ontologies and then putting it into practice through a semantic wiki. The QMS ontology that has been developed captures the core concepts of a traditional QMS and combines them with concepts coming from the MPIu'a development process model, which is geared towards obtaining usable and accessible software products. The ontology semantics is then directly put into play by a semantics-aware tool, the Semantic MediaWiki. The developed QMS tool has been used for two years by the GRIHO research group, where it has managed almost 50 software development projects, taking quality management issues into account. It has also been externally audited by a quality certification organisation. Its users are very satisfied with their daily work with the tool, which manages all the documents created during project development and also allows them to collaborate, thanks to the wiki features.
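A minimal sketch of how a few QMS concepts could be expressed as an ontology with rdflib; the namespace, class and property names are hypothetical and are not taken from the authors' QMS ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

QMS = Namespace("http://example.org/qms#")  # hypothetical namespace
g = Graph()
g.bind("qms", QMS)

# Core QMS concepts as ontology classes (illustrative subset).
for cls in ("QualityProcedure", "Document", "Audit", "Project"):
    g.add((QMS[cls], RDF.type, RDFS.Class))

# A property linking projects to the procedures they follow.
g.add((QMS.followsProcedure, RDF.type, RDF.Property))
g.add((QMS.followsProcedure, RDFS.domain, QMS.Project))
g.add((QMS.followsProcedure, RDFS.range, QMS.QualityProcedure))

# An instance, as it might be created from a semantic wiki page.
g.add((QMS.UsabilityReviewProject, RDF.type, QMS.Project))
g.add((QMS.UsabilityReviewProject, QMS.followsProcedure, QMS.DesignReview))
g.add((QMS.UsabilityReviewProject, RDFS.label, Literal("Usability review project")))

print(g.serialize(format="turtle"))
```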
Abstract:
The importance of software to modern society is continuously growing. Many software projects struggle with staying on schedule, maintaining high productivity and achieving sufficiently high quality. Large investments have been made in software process improvement in order to minimize these problems. The investments have been motivated by the assumption that product quality depends directly on the capability of the software development process. The purpose of this study was to examine the possibilities of software process improvement. Existing models, techniques and methodologies for software development and software process improvement were presented. The suitability of the presented models, techniques and methodologies was analysed, and a recommendation on their use was given.
Abstract:
Large enterprises have for many years employed eBusiness solutions in order to improve their efficiency. Smaller companies, however, have not been able to leverage these technologies due to the high level of know-how and resources required to implement them. To solve this, novel software services are being developed to facilitate eBusiness adoption for the small enterprise, with the aim of making B2Bi feasible not only between large organisations but also between trading partners of all sizes. The objective of this study was to find which eBusiness, software testing and quality assurance standards and techniques are best suited for building these new kinds of software, considering the requirements their unique eBusiness approach poses. The research was conducted as a literature study focusing on software testing and quality assurance standards together with eBusiness standards. The study showed that current software testing and quality assurance standards do not possess characteristics that would make particular standards evidently better suited for building this type of software, which was established to be best developed as web services in order to meet its requirements. A selection of eBusiness standards and technologies was proposed to support this approach. The main finding of the study was, however, that web services of this kind, which have high interoperability requirements, will have to be able to carry out automated interoperability and conformance testing as part of their operation; this objective dictates how the software is built and how testing during software development is to be done. The study showed that research on automated interoperability and conformance testing for web services is still limited, and more research is needed to make the building of highly interoperable web services more feasible.
Abstract:
Today's software industry faces increasingly complicated challenges in a world where software is almost ubiquitous in our daily lives. Consumers want products that are reliable, innovative and rich in functionality, but at the same time affordable. The challenge for us in the IT industry is to create more complex, innovative solutions at a lower cost. This is one of the reasons why process improvement as a research area has not diminished in importance. IT professionals ask themselves: "How do we keep our promises to our customers while minimizing our risk and increasing our quality and productivity?" Within the field of process improvement there are different approaches. Traditional software process improvement methods such as CMMI and SPICE focus on the quality and risk aspects of the improvement process. More lightweight methods, such as agile methods and Lean methods, focus on keeping promises and improving productivity by minimizing waste in the development process. The research presented in this thesis was carried out with a specific goal in mind: to improve the cost-effectiveness of working methods without compromising quality. That challenge was attacked from three different angles. First, working methods are improved by introducing agile methods. Second, quality is maintained by using product-level measurements. Third, knowledge dissemination within large companies is improved through methods that put collaboration at the centre. The agile movement emerged during the 1990s as a reaction to the unrealistic demands that the previously dominant waterfall method placed on the IT industry. Software development is a creative process and differs from other industries in that most of the daily work consists of creating something new that did not exist before. Every software developer must be an expert in her field and spends a large part of her working day creating solutions to problems she has never solved before. Although this has been a well-known fact for many decades, many software projects are still managed as if they were factory production lines. One of the goals of the agile movement is to highlight precisely this discrepancy between the inner nature of software development and the way software projects are managed. Agile methods have proven to work well in the contexts they were created for, that is, small, co-located teams working in close collaboration with a committed customer. In other contexts, and especially in large, geographically distributed companies, introducing agile methods is more challenging. We have approached that challenge by introducing agile methods through pilot projects. This has two clear advantages. First, knowledge about the methods and their interaction with the context in question can be gathered incrementally, which makes it easier to develop and adapt the methods to the specific requirements of that context. Second, resistance to change can be overcome more easily by introducing cultural changes carefully and by giving the target group direct first-hand contact with the new methods. Relevant product measurements can help software development teams improve their working methods. For teams working with agile and Lean methods, a good set of measurements can be decisive when prioritizing the list of tasks to be done. Our focus has been on supporting agile and Lean teams with internal product measurements for decision support concerning refactoring, that is, continuous quality improvement of the program's code and design. The decision to refactor can be difficult, especially for agile and Lean teams, since they are expected to justify their priorities in terms of business value. We propose a way to measure the design quality of systems developed using the model-driven paradigm, and we also construct a way to integrate this measurement into agile and Lean working methods. An important part of any process improvement initiative is spreading knowledge about the new software process. This applies regardless of the kind of process being introduced, whether plan-driven or agile. We propose that methods based on collaboration when the process is created and further developed are a good way to support knowledge dissemination, and we provide an overview of process authoring tools on the market with that proposal in mind.