972 results for Distribuito, Modello, Attori, Software, quality, Scalabilità
Abstract:
The verification and validation activity plays a fundamental role in improving software quality. Identifying the most effective techniques for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible to the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults. The session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults. The session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor; they can be explained by small sample effects. The results for the session/program factor are inconsistent for InScope faults. We believe that these differences are due to a combination of the fatigue effect and a technique x program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies for sure. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
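To make the contrast between the two techniques concrete, the following minimal sketch (a hypothetical classify_triangle function and hand-derived test cases, not material from the experiment) shows how equivalence partitioning derives tests from the specification while branch testing derives them from the code's decision structure:

```python
# Hypothetical function under test (illustrative only, not from the experiment).
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Equivalence partitioning (EP): one test per input class, derived from the
# specification alone -- faults in code paths outside these classes stay
# out of scope for the technique.
def test_equivalence_partitioning():
    assert classify_triangle(3, 3, 3) == "equilateral"   # class: all sides equal
    assert classify_triangle(3, 3, 5) == "isosceles"     # class: two sides equal
    assert classify_triangle(3, 4, 5) == "scalene"       # class: all sides differ
    assert classify_triangle(-1, 2, 3) == "invalid"      # class: non-positive side

# Branch testing (BT): tests chosen so every decision outcome in the code is
# exercised at least once, regardless of the specification.
def test_branch_coverage():
    assert classify_triangle(0, 1, 1) == "invalid"       # non-positive branch
    assert classify_triangle(1, 2, 10) == "invalid"      # triangle-inequality branch
    assert classify_triangle(2, 2, 2) == "equilateral"
    assert classify_triangle(2, 2, 3) == "isosceles"
    assert classify_triangle(2, 3, 4) == "scalene"

test_equivalence_partitioning()
test_branch_coverage()
print("both test suites passed")
```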
Abstract:
Usability is an aspect of software quality that is found in most classifications. In this context, usability is understood as the extent to which a product satisfies the needs of its stakeholders in order to reach certain effectiveness, efficiency and satisfaction goals without adverse side effects in a specific context of use. It is no wonder that usability is increasingly being recognized as one of the critical factors for the acceptance of a software system. Recommendations for improving the usability of software systems can be found in the literature. Nevertheless, there is no empirical data showing how these recommendations contribute positively or negatively to each usability attribute. In this context, the objective of this doctoral thesis is to obtain empirical evidence on the impact of including usability mechanisms in a software system, specifically on the attributes of effectiveness, efficiency and satisfaction. This doctoral thesis deals with usability mechanisms that have a great impact on the software architecture, since they have to be specially considered during the software development process. In order to obtain empirical evidence on the inclusion of such mechanisms, a pilot experiment and four definitive experiments were performed using software applications developed especially for this purpose. Once the empirical assessment of the influence of these mechanisms on different software systems had been performed, the following general recommendations could be made: (1) the inclusion of each of the considered mechanisms considerably increases user satisfaction when interacting with the software applications; (2) the inclusion of those mechanisms that introduce additional interaction dialogues diminishes user efficiency, since it increases task duration; (3) there is a group of mechanisms whose absence from an application makes it impossible to perform the tasks associated with them, including the mechanisms that allow cancelling or undoing previously taken actions and those that provide help when entering information in forms, so their presence allows the successful completion of tasks and, as a consequence, makes the interaction much more effective. In summary, it was found that the various usability mechanisms have a different impact on each attribute, so their inclusion depends on the attribute or attributes to be enhanced in each project.
Abstract:
Context: This PhD dissertation addresses the requirements elicitation activity. Requirements elicitation is generally acknowledged as one of the most important activities of the requirements process and has a direct impact on software quality. It is an activity where communication among stakeholders (analysts, customers, users) is paramount. The analyst's ability to effectively understand customers' and users' needs is a critical factor for the success of software development. The literature has focused on studying and describing the specific set of personal skills that the analyst must have to perform requirements elicitation effectively. However, few studies have explored those skills from an empirical viewpoint. Goal: This research aims to study the effects of experience, domain knowledge and academic qualifications on analysts' effectiveness when performing requirements elicitation, during the first stages of analyst-customer interaction. Research method: We conducted eight empirical studies: four quasi-experiments and four controlled experiments. A total of 110 experimental subjects participated, including graduate students from the Escuela Técnica Superior de Ingenieros Informáticos of the Universidad Politécnica de Madrid as well as professionals. The experimental task consisted of elicitation sessions about one or several problem domains (known or unknown to the subjects). Elicitation sessions were conducted using unstructured interviews. After each interview, the subjects reported in writing all the information they had acquired. Results: In unknown domains, the analyst's experience (in interviews, requirements, development, and as a professional) does not influence effectiveness. In known domains, interviewing experience (r = 0.34, p-value = 0.080) and requirements experience (r = 0.22, p-value = 0.279) have a positive effect; that is, analysts with more years of interviewing or requirements experience tend to achieve higher effectiveness. By contrast, development experience (r = -0.06, p-value = 0.765) and professional experience (r = -0.35, p-value = 0.077) tend to have a null and a negative effect, respectively. The analyst's problem domain knowledge has a moderate positive effect (r = 0.31), statistically significant (p-value = 0.029), on the effectiveness of the elicitation activity; that is, analysts tend to be more effective in problem domains they know. As regards academic qualifications, the lack of diversity in the subjects' academic degrees prevents us from reaching a conclusion. We were able to explore the effect of academic qualifications in only two quasi-experiments, and the results show opposing effects (r = 0.694, p-value = 0.51 and r = -0.266, p-value = 0.383). Besides the variables studied above, we confirmed the existence of moderator variables influencing the elicitation activity, such as the interviewee and training. Our data confirm that the interviewee is a key factor in the elicitation activity, with a statistically significant effect on analysts' effectiveness (p-value = 0.000); interviewing one or another interviewee represents a difference in effectiveness of 18%-23%, in natural units. Furthermore, requirements training considerably increases analysts' effectiveness: subjects who performed requirements elicitation after specific requirements training tend to be 12%-20% more effective than those who did not receive it, and the effect is statistically significant (p-value = 0.000). Finally, we observed three phenomena that could influence the results of this research. First, analysts' effectiveness differs depending on the type of domain element: in known domains, experienced analysts tend to acquire more concepts than novices, whereas in unknown domains it is processes that are acquired most prominently. Second, analysts reach a kind of "glass ceiling" that prevents them from acquiring more information; that is, the analyst only recognizes (part of) the elements of the problem domain mentioned. This is observed in both known and unknown domains and appears to be related to how analysts explore the problem domain. Third, years of experience do not seem to predict how effective an analyst will be; however, they do seem to guarantee that an analyst with some experience will, in general, have a minimum effectiveness that is higher than the minimum effectiveness of less experienced analysts. Conclusions: Our results show that, in unknown domains, experience alone does not determine requirements analysts' effectiveness. In known domains, analysts' effectiveness is influenced by their experience in interviews and requirements, albeit only partially. Other variables, such as soft skills, also influence analysts' effectiveness. The analyst's problem domain knowledge has a positive effect on effectiveness and interacts positively with experience, increasing effectiveness even further. Although it was not possible to draw solid conclusions about the effect of academic qualifications, it is clear that specific requirements training has an important positive effect on analysts' effectiveness. Finally, the analyst is not the only relevant factor in the elicitation activity; the customers/users (interviewees) also play an important role in the information generation process.
Abstract:
There is growing interest in the use of context-awareness as a technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of users. However, context-awareness introduces a variety of software engineering challenges. In this paper, we address these challenges by proposing a set of conceptual models designed to support the software engineering process, including context modelling techniques, a preference model for representing context-dependent requirements, and two programming models. We also present a software infrastructure and software engineering process that can be used in conjunction with our models. Finally, we discuss a case study that demonstrates the strengths of our models and software engineering approach with respect to a set of software quality metrics.
Abstract:
As part of the activities of the first Symposium on Process Improvement Models and Software Quality of the Spanish Public Administration, working groups were formed to discuss the current state of the Requirements Management and Supplier Agreement Management processes. This article presents the general results and main contributions of those working groups. The results have enabled a preliminary appraisal of the current state of these two processes in the Spanish Public Administration.
Abstract:
Long-term foetal surveillance is often recommended. Hence, fully non-invasive acoustic recording through the maternal abdomen represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded heart sound signal is heavily loaded with noise, so the determination of the foetal heart rate raises serious signal processing issues. In this paper, we present a new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings. Filtering is employed as the first step of the algorithm to reduce the background noise. A block for enhancing the first heart sounds is then used to further reduce the other components of the foetal heart sound signal. A complex logic block, guided by a number of rules concerning foetal heart beat regularity, is proposed as the next block for the detection of the most probable first heart sounds from several candidates. A final block is used for exact first heart sound timing and, in turn, foetal heart rate estimation. The filtering and enhancing blocks are implemented by means of different techniques, so that different processing paths are proposed. Furthermore, a reliability index is introduced to quantify the consistency of the estimated foetal heart rate and, based on statistical parameters, a software quality index is designed to indicate the most reliable analysis procedure (that is, the combination of processing path and first heart sound time mark that provides the lowest estimation errors). The algorithm's performance was tested on phonocardiographic signals recorded in a local private gynaecology practice from a sample group of about 50 pregnant women. The phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by our algorithm and the one provided by the cardiotocographic device). Our results show that the proposed algorithm, and in particular some analysis procedures, provides reliable foetal heart rate signals, very close to the reference cardiotocographic recordings. © 2010 Elsevier Ltd. All rights reserved.
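As an illustration of the kind of processing pipeline described above (filtering, first-heart-sound enhancement, regularity-constrained detection, rate estimation and a reliability index), here is a minimal Python sketch. The cut-off frequencies, thresholds and the reliability formula are assumed placeholder values, not the ones used in the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def estimate_fetal_heart_rate(pcg, fs=1000.0):
    """Illustrative FHR estimation sketch (not the paper's algorithm).

    pcg : 1-D phonocardiographic signal; fs : sampling rate in Hz.
    """
    # 1) Band-pass filtering to reduce background noise (assumed 30-70 Hz band).
    b, a = butter(4, [30.0 / (fs / 2), 70.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)

    # 2) First-heart-sound enhancement via the analytic signal envelope.
    envelope = np.abs(hilbert(filtered))

    # 3) Candidate S1 detection constrained by beat regularity: successive
    #    fetal beats are assumed at least ~0.35 s apart (roughly 170 bpm max).
    min_distance = int(0.35 * fs)
    peaks, _ = find_peaks(envelope, distance=min_distance,
                          height=0.5 * envelope.max())
    if len(peaks) < 2:
        return None, 0.0

    # 4) Heart rate from inter-beat intervals, plus a crude reliability
    #    index based on how consistent those intervals are.
    intervals = np.diff(peaks) / fs
    fhr_bpm = 60.0 / np.median(intervals)
    reliability = 1.0 - np.std(intervals) / np.mean(intervals)
    return fhr_bpm, reliability
```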
Abstract:
Software bug analysis is one of the most important activities in software quality. The rapid and correct implementation of the necessary repairs affects both developers, who must deliver fully functioning software, and users, who need to perform their daily tasks. In this context, an incorrect classification of bugs may lead to unwanted situations. One of the main attributes assigned to a bug when it is first reported is its severity, which reflects the urgency of correcting that problem. In this scenario, we identified, in datasets extracted from five open source systems (Apache, Eclipse, Kernel, Mozilla and Open Office), an irregular distribution of bugs with respect to the existing severities, which is an early sign of misclassification. In the datasets analyzed, about 85% of bugs are ranked with normal severity. This classification rate can therefore have a negative influence on the software development context, where a misclassified bug may be allocated to a developer with little experience to solve it; as a result, its correction may take longer or even lead to an incorrect implementation. Several studies in the literature have disregarded normal bugs, working only with the portion of bugs initially considered severe or non-severe. This work aimed to investigate this portion of the data in order to identify whether the normal severity reflects the real impact and urgency, to investigate whether there are bugs (initially classified as normal) that could be classified with another severity, and to assess whether there are impacts for developers in this context. To this end, an automatic classifier was developed, based on three algorithms (Naïve Bayes, MaxEnt and Winnow), to assess whether the normal severity is correct for the bugs initially categorized with this severity. The algorithms achieved an accuracy of about 80% and showed that between 21% and 36% of the bugs should have been classified differently (depending on the algorithm), which represents somewhere between 70,000 and 130,000 bugs in the dataset.
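A minimal sketch of this kind of severity re-classification with one of the three algorithms named above, Naïve Bayes (here via scikit-learn, with hypothetical training data; the work's own implementation, features and label set may differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: bug summaries labelled with a severity class.
summaries = [
    "crash on startup when opening a project",
    "data loss after unexpected shutdown",
    "typo in preferences dialog label",
    "minor misalignment of toolbar icons",
]
severities = ["severe", "severe", "non-severe", "non-severe"]

# Bag-of-words features + multinomial Naive Bayes; a MaxEnt (logistic
# regression) or Winnow classifier could be substituted in the same pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(summaries, severities)

# Re-assess a bug that was originally filed with "normal" severity.
print(model.predict(["application crashes when saving a file"]))  # e.g. ['severe']
```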
Abstract:
Single-page applications have historically been subject to strong market forces driving fast development and deployment in lieu of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, with AngularJS being a general framework providing rich base functionality and React a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability in the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size.
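For reference, one widely used three-factor formulation of the Maintainability Index combines Halstead volume, cyclomatic complexity and lines of code; the exact variant and tooling used in the report may differ, and the metric values below are hypothetical:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic three-factor Maintainability Index (unnormalised variant).

    Inputs are per-module values; many tools rescale the result to a 0-100
    range. Lower values indicate code that is harder to maintain.
    """
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(loc))
    return max(0.0, mi)

# Hypothetical module metrics for two applications being compared.
print(maintainability_index(halstead_volume=1200, cyclomatic_complexity=14, loc=350))
print(maintainability_index(halstead_volume=800, cyclomatic_complexity=9, loc=220))
```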
Abstract:
The life cycle of software applications is generally very short and their requirements extremely volatile. Under these conditions, programmers need development tools and techniques that provide an extreme level of productivity. We consider code reuse to be the most prominent approach to solving that problem. Our proposal uses the advantages provided by Aspect-Oriented Programming to build a reusable framework capable of making both the programmer and the application oblivious to data persistence, thus avoiding the need to write any line of code about that concern. Besides the benefits to productivity, software quality increases. This paper describes the current state of the art, identifying the main challenge in building a complete and reusable framework for Orthogonal Persistence in concurrent environments with support for transactions. The present work also includes a successfully developed prototype of that framework, capable of freeing the programmer from implementing any data read or write operations. This prototype is backed by an object-oriented database and, in the future, will also use a relational database and support transactions.
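Purely to illustrate the idea of transparent (orthogonal) persistence as a crosscutting concern, the sketch below emulates it in Python with a class decorator that intercepts attribute writes and saves the object, leaving the domain class free of any persistence code. The decorator, store and names are hypothetical and are not the framework's API, which is based on aspect weaving rather than decorators:

```python
import shelve

def persistent(cls):
    """Class decorator that transparently saves an object after every
    attribute write -- a rough, illustrative stand-in for the write
    advice an AOP persistence framework would weave in."""
    original_setattr = cls.__setattr__

    def persisting_setattr(self, name, value):
        original_setattr(self, name, value)
        # Crosscutting persistence concern: the domain class below never
        # mentions storage at all.
        with shelve.open("objects.db") as store:
            store[f"{cls.__name__}:{id(self)}"] = self.__dict__.copy()

    cls.__setattr__ = persisting_setattr
    return cls

@persistent
class Customer:
    def __init__(self, name, email):
        self.name = name          # each assignment is persisted implicitly
        self.email = email

c = Customer("Ada", "ada@example.org")
c.email = "ada@analytical.engine"   # the update is persisted without any explicit save
```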
Abstract:
In recent years, Web applications have been playing an increasingly important role in everyone's life. Whereas until a few years ago we were used to working almost exclusively with "native" applications, executed entirely on our personal computers, today many users use their devices almost exclusively to access Web applications. Web applications have made it possible to create social networks such as Facebook, which has been enormously successful all over the world and has revolutionized the way many people communicate. Moreover, many traditional applications, such as office suites, have been turned into Web applications such as Google Docs, which add, for example, the possibility for several people to work on the same document at the same time. Web applications are therefore taking on an ever more important role, and consequently it is becoming essential to be able to build Web applications that can compete with native applications, that is, applications capable of carrying out all the tasks traditionally performed by computers. In this thesis we therefore analyse the various ways in which Web applications can be improved, both in terms of the functions they can perform and in terms of scalability. Since modern Web applications increasingly need to perform computations concurrently and in a distributed fashion, we examine a computational model that is particularly well suited to designing this kind of software: the Actor model. As a case study of a framework for building advanced Web applications, we then look at the Play framework: it is based on the Akka actor programming platform and makes it possible to build extremely powerful and scalable Web applications in a simple way. Since modern Web applications must meet certain scalability and fault-tolerance requirements from the outset, we address the problem of how to build Web applications designed to run on Cloud Computing platforms. In particular, we show how to deploy a Web application based on the Play framework on Heroku, a PaaS Cloud Computing service.
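Akka actors, as used by the Play framework, are written in Scala or Java; purely to illustrate the actor model's core idea of isolated state plus asynchronous message passing, here is a minimal, hypothetical sketch in Python using a thread and a mailbox queue:

```python
import queue
import threading

class CounterActor:
    """Minimal actor: private state, a mailbox, and sequential message
    processing -- no locks are needed because only the actor's own
    thread ever touches its state."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0                     # state isolated inside the actor
        threading.Thread(target=self._run, daemon=True).start()

    def tell(self, message, reply_to=None):
        """Asynchronous, non-blocking send (fire-and-forget)."""
        self._mailbox.put((message, reply_to))

    def _run(self):
        while True:
            message, reply_to = self._mailbox.get()
            if message == "increment":
                self._count += 1
            elif message == "get":
                reply_to.put(self._count)   # reply via another queue

actor = CounterActor()
for _ in range(3):
    actor.tell("increment")

reply = queue.Queue()
actor.tell("get", reply)
print(reply.get())   # -> 3
```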
Abstract:
After a general overview of the state of the art in software and concurrency, this thesis aims to study the actor model and to analyse some of its significant implementations. Having chosen a specific implementation as a case study, namely the AXUM language on the .NET platform, the language is then examined in detail, analysing all of its aspects and assessing its potential. Finally, a brief but significant critical analysis of the chosen language is presented.
Abstract:
An increasing number of m-Health applications are being developed, benefiting health service delivery. In this paper, a new methodology, based on the principle of calm computing and applied to diagnostic and therapeutic procedure reporting, is proposed. A mobile application was designed for the physicians of one of the major Portuguese hospitals, which takes advantage of a multi-agent interoperability platform, the Agency for the Integration, Diffusion and Archive (AIDA). This application allows the visualization of inpatient and outpatient medical reports in a quicker and safer manner, in addition to offering remote access to information. This project shows the advantages of using mobile software in a medical environment, but the first step is always to build or use an interoperability platform that is flexible, adaptable and pervasive. The platform offers a comprehensive set of services that restricts the development of mobile software almost exclusively to the design of the mobile user interface. The technology was tested and assessed in a real context by intensivists.
Abstract:
Magdeburg, Univ., Fak. für Informatik, Diss., 2010
Abstract:
Quality management has become a strategic issue for organisations and is very valuable for producing quality software. However, quality management systems (QMS) are not easy to implement and maintain. The authors' experience shows the benefits of developing a QMS by first formalising it using semantic web ontologies and then putting it into practice through a semantic wiki. The QMS ontology that has been developed captures the core concepts of a traditional QMS and combines them with concepts coming from the MPIu+a development process model, which is geared towards obtaining usable and accessible software products. The ontology semantics is then directly put into play by a semantics-aware tool, the Semantic MediaWiki. The developed QMS tool has been used for two years by the GRIHO research group, where it has managed almost 50 software development projects, taking quality management issues into account. It has also been externally audited by a quality certification organisation. Its users are very satisfied with their daily work with the tool, which manages all the documents created during project development and also allows them to collaborate, thanks to its wiki features.
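As a minimal illustration of formalising QMS concepts with semantic web ontologies, the sketch below builds a tiny RDF/RDFS fragment in Python with rdflib. The class and property names are hypothetical and are not those of the actual GRIHO ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical fragment of a QMS ontology (illustrative names only).
QMS = Namespace("http://example.org/qms#")
g = Graph()
g.bind("qms", QMS)

# A generic quality procedure class and a usability-oriented subclass,
# echoing the combination of QMS and usability/accessibility concepts.
g.add((QMS.QualityProcedure, RDF.type, RDFS.Class))
g.add((QMS.UsabilityEvaluation, RDFS.subClassOf, QMS.QualityProcedure))
g.add((QMS.UsabilityEvaluation, RDFS.label,
       Literal("Usability evaluation procedure")))

# Serialise the fragment as Turtle, e.g. for import into a semantic wiki.
print(g.serialize(format="turtle"))
```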