12 results for "customization"
at the Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This Master's thesis discusses customer satisfaction models, investigating how the antecedent variables service quality, price index, complaint handling, image, and affective and calculative commitment relate to satisfaction and loyalty. The research focuses on the influence of service dimensions on car buyers' satisfaction and loyalty. A sample of 91 new-car buyers of a single brand in the city of Natal, Brazil, was surveyed, and the data were analyzed using multiple regression analysis. The literature review covers customer satisfaction, management systems, and customer satisfaction measurement index models. The main findings suggest that satisfaction with the car brand is influenced mainly by customization of the service, the time taken to complete servicing, and the way the dealer handles complaints. For the dealer itself, the main variable related to satisfaction is also the time taken to complete servicing. Regarding customer loyalty, satisfaction with the dealer strongly explains loyalty to the brand/manufacturer. Satisfaction, affective commitment, and complaint handling were also found to be the strongest variables explaining the variance in loyalty. One main conclusion is that the service provided by dealers is a key factor influencing customer satisfaction and loyalty in the auto industry.
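A minimal sketch of the kind of multiple regression analysis described above, assuming hypothetical variable names and synthetic data (the figures below are illustrative, not the thesis sample):

```python
import numpy as np

# Illustrative survey data: each row is one respondent, columns are antecedent
# variables on an arbitrary 1-10 scale. Variable names are assumptions.
rng = np.random.default_rng(0)
n = 91
service_quality    = rng.uniform(1, 10, n)
servicing_time     = rng.uniform(1, 10, n)   # perceived speed of servicing
complaint_handling = rng.uniform(1, 10, n)
customization      = rng.uniform(1, 10, n)

# Hypothetical linear relationship for satisfaction with the brand.
satisfaction = (0.4 * customization + 0.3 * servicing_time
                + 0.2 * complaint_handling + 0.1 * service_quality
                + rng.normal(0, 0.5, n))

# Ordinary least squares: satisfaction ~ intercept + antecedent variables.
X = np.column_stack([np.ones(n), service_quality, servicing_time,
                     complaint_handling, customization])
coef, residuals, rank, _ = np.linalg.lstsq(X, satisfaction, rcond=None)

names = ["intercept", "service_quality", "servicing_time",
         "complaint_handling", "customization"]
for name, b in zip(names, coef):
    print(f"{name:>20}: {b:+.3f}")
```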
Abstract:
Model-oriented strategies have been used to facilitate product customization in the context of software product lines (SPL) and to generate the source code of the derived products through variability management. Most of these strategies rely on UML (Unified Modeling Language)-based model specifications. Despite its wide adoption, UML-based specification has limitations: it is essentially graphical, falls short in precisely describing the semantics of the system architecture, and produces large models that hamper the visualization and comprehension of system elements. In contrast, architecture description languages (ADLs) provide graphical and textual support for the structural representation of architectural elements, their constraints, and their interactions. This thesis introduces ArchSPL-MDD, a model-driven strategy in which models are specified and configured using the LightPL-ACME ADL. The strategy is associated with a generic process whose systematic activities enable customized source code to be generated automatically from the product model. The ArchSPL-MDD strategy integrates aspect-oriented software development (AOSD), model-driven development (MDD), and SPL, enabling the explicit modeling and the modularization of variabilities and crosscutting concerns. The process is instantiated by the ArchSPL-MDD tool, which supports the specification of domain models (the focus of the development) in LightPL-ACME. ArchSPL-MDD uses the Ginga Digital TV middleware as a case study. To evaluate the efficiency, applicability, expressiveness, and complexity of the ArchSPL-MDD strategy, a controlled experiment was carried out comparing the ArchSPL-MDD tool with the GingaForAll tool, which instantiates the process of the GingaForAll UML-based strategy. Both tools were used to configure products of the Ginga SPL and to generate the product source code.
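The thesis uses the LightPL-ACME ADL; purely as an illustration of the general idea of deriving product-specific source code from a configured product model, here is a small Python sketch in which the product, feature, and template names are hypothetical:

```python
from dataclasses import dataclass, field

# Toy product model: a product is configured by selecting optional features.
# Feature names below are illustrative, not taken from the Ginga SPL.
@dataclass
class ProductModel:
    name: str
    features: set = field(default_factory=set)

TEMPLATES = {
    "tuner":            "class Tuner:\n    def tune(self, channel): ...\n",
    "recording":        "class Recorder:\n    def record(self, stream): ...\n",
    "interactive_apps": "class AppManager:\n    def launch(self, app): ...\n",
}

def derive_source(product: ProductModel) -> str:
    """Generate product-specific source code from the configured model."""
    parts = [f"# Generated for product: {product.name}"]
    for feature in sorted(product.features):
        parts.append(TEMPLATES[feature])
    return "\n".join(parts)

full = ProductModel("GingaFull", {"tuner", "recording", "interactive_apps"})
print(derive_source(full))
```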
Abstract:
Middleware platforms have been widely used as an underlying infrastructure for the development of distributed applications. They provide distribution and heterogeneity transparency and a set of services that ease the construction of distributed applications. Nowadays, middleware accommodates an increasing variety of requirements to satisfy distinct application domains. This broad range of application requirements increases the complexity of the middleware, because many crosscutting concerns are introduced into the architecture that are not properly modularized by traditional programming techniques, resulting in the tangling and scattering of these concerns in the middleware code. The presence of these crosscutting concerns limits middleware scalability, and the aspect-oriented paradigm has been used successfully to improve the modularity, extensibility, and customization capabilities of middleware. This work presents AO-OiL, an aspect-oriented (AO) middleware architecture based on the AO middleware reference architecture. This middleware follows the philosophy that middleware functionality must be driven by application requirements. AO-OiL consists of an AO refactoring of the OiL (Orb in Lua) middleware that separates basic and crosscutting concerns. The proposed architecture was implemented in Lua and RE-AspectLua. To evaluate the impact of the refactoring on the middleware architecture, this work presents a comparative performance analysis between AO-OiL and OiL.
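AO-OiL is implemented in Lua with RE-AspectLua; as a rough analogue only, the Python sketch below shows how a crosscutting concern (here, tracing) can be kept out of the base middleware code and woven in afterwards. The Broker class and its method are hypothetical:

```python
import functools
import time

# Crosscutting concern (the "aspect"): timing/tracing, modularized separately.
def traced(func):
    @functools.wraps(func)
    def advice(*args, **kwargs):          # "around" advice analogue
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"[trace] {func.__name__} took {elapsed:.2f} ms")
        return result
    return advice

# Base concern: a toy request broker (names are illustrative).
class Broker:
    def dispatch(self, operation, payload):
        return f"dispatched {operation}({payload})"

# "Weaving": the aspect is applied without editing the Broker class itself.
Broker.dispatch = traced(Broker.dispatch)

print(Broker().dispatch("echo", "hello"))
```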
Abstract:
Many challenges are imposed on middleware that supports digital TV applications because of the heterogeneity and resource constraints of the execution platforms. In this scenario, the middleware must be highly configurable so that it can be customized to meet the requirements of applications and underlying platforms. This work presents GingaForAll, a software product line developed for Ginga, the middleware of the Brazilian Digital TV System (SBTVD). GingaForAll combines the concepts of software product lines, aspect orientation, and model-driven development to allow: (i) the specification of the common and variable characteristics of the middleware; (ii) the modularization of crosscutting concerns, both mandatory and variable, through aspects; and (iii) the expression of concepts as a set of models that raises the level of abstraction and enables the management of the various software artifacts in terms of configurable models. This work presents the architecture of the software product line, together with the tooling and architecture that support the automatic customization of the middleware. The work also presents a tool that implements the GingaForAll product generation process.
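As an illustration of the configurability described above, a minimal sketch of validating a product configuration against a toy feature model with mandatory features, optional features, and one cross-tree constraint; the feature names are assumptions, not the actual GingaForAll model:

```python
# Toy feature model for a configurable middleware (feature names are illustrative).
MANDATORY = {"tuner", "presentation_engine"}
OPTIONAL  = {"return_channel", "media_recording", "second_screen"}
REQUIRES  = {"second_screen": {"return_channel"}}   # cross-tree constraint

def validate_configuration(selected: set) -> list:
    """Return a list of constraint violations for a product configuration."""
    errors = []
    for feature in MANDATORY - selected:
        errors.append(f"missing mandatory feature: {feature}")
    for feature in selected - MANDATORY - OPTIONAL:
        errors.append(f"unknown feature: {feature}")
    for feature, deps in REQUIRES.items():
        if feature in selected and not deps <= selected:
            missing = ", ".join(sorted(deps - selected))
            errors.append(f"{feature} requires {missing}")
    return errors

config = {"tuner", "presentation_engine", "second_screen"}
print(validate_configuration(config))   # -> ['second_screen requires return_channel']
```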
Abstract:
In recent years, several middleware platforms for Wireless Sensor Networks (WSN) have been proposed. Most of these platforms do not consider how to integrate components from generic middleware architectures. Many requirements must be considered in the design of a middleware for WSNs, among them the possibility of modifying the middleware source code without changing its external behavior. It is therefore desirable to have a generic middleware architecture that can offer an optimal configuration according to the requirements of the application. Adopting middleware based on a component model is a promising approach because it allows better abstraction, low coupling, modularization, and management of the features built into the middleware. Another problem in current middleware is the treatment of interoperability with networks external to the sensor network, such as the Web. Most current middleware lacks the functionality to access the data provided by the WSN via the World Wide Web, treating these data as Web resources that can be accessed through protocols already adopted on the Web. This work presents Midgard, a component-based middleware specifically designed for WSNs that adopts the microkernel and REST architectural patterns. The microkernel pattern complements the component model, since the microkernel can be understood as a component that encapsulates the core of the system and is responsible for initializing the core services only when needed, as well as removing them when they are no longer needed. REST, in turn, defines a standardized way of communication between different applications based on standards adopted on the Web, making it possible to treat WSN data as Web resources that can be accessed through protocols already adopted on the World Wide Web. The main goals of Midgard are: (i) to provide easy Web access to data generated by the WSN, exposing such data as Web resources following the principles of the Web of Things paradigm; and (ii) to provide the WSN application developer with the ability to instantiate only the specific services required by the application, thus generating a customized middleware and saving node resources. Midgard allows the WSN to be used as a set of Web resources while providing a cohesive and loosely coupled software architecture, addressing interoperability and customization. In addition, Midgard provides two services needed by most WSN applications: (i) configuration and (ii) inspection and adaptation. New services can be implemented by third parties and easily incorporated into the middleware thanks to its flexible and extensible architecture. According to the evaluation, Midgard provides interoperability between the WSN and external networks, such as the Web, as well as between different applications within a single WSN. We also assessed Midgard's memory consumption, application image size, size of messages exchanged in the network, response time, overhead, and scalability. During the evaluation, Midgard proved to satisfy its goals and was shown to be scalable without consuming resources prohibitively.
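A minimal sketch of exposing sensor readings as Web resources in the spirit of the Web of Things, using only the Python standard library. The resource paths and payloads are illustrative assumptions, not Midgard's actual API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical latest readings per sensor resource.
SENSORS = {"node1/temperature": 24.5, "node1/humidity": 61.0}

class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = self.path.strip("/")
        if key in SENSORS:
            body = json.dumps({"resource": key, "value": SENSORS[key]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # GET http://localhost:8080/node1/temperature returns the reading as JSON.
    HTTPServer(("", 8080), SensorHandler).serve_forever()
```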
Abstract:
This dissertation presents an integrated, model-driven approach to the variability management, customization, and execution of software processes. Our approach is founded on the principles and techniques of software product lines and model-driven engineering. Model-driven engineering supports the specification of software processes and their transformation into workflow specifications. Software product line techniques allow the automatic variability management of process elements and fragments. Additionally, in our approach, workflow technologies enable process execution in workflow engines. In order to evaluate the feasibility of the approach, we implemented it using existing model-driven engineering technologies. The software processes are specified using the Eclipse Process Framework (EPF). The automatic variability management of software processes has been implemented as an extension of an existing product derivation tool. Finally, the ATL and Acceleo transformation languages are adopted to transform EPF processes into jPDL workflow language specifications, enabling the deployment and execution of software processes in the JBoss BPM workflow engine. The approach is evaluated through the modeling and modularization of the project management discipline of the Open Unified Process (OpenUP).
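The dissertation uses ATL and Acceleo for the model-to-text step; only as an illustration of that kind of transformation, a Python sketch turning a toy process model into a workflow definition. The element names are generic placeholders, not the exact jPDL schema:

```python
# Toy process model: a named process with an ordered list of selected fragments.
process = {
    "name": "ProjectManagement",
    "tasks": ["Plan Iteration", "Assess Results"],
}

def to_workflow_xml(model: dict) -> str:
    """Model-to-text transformation producing a workflow-style XML string."""
    lines = [f'<process name="{model["name"]}">', '  <start-state name="start"/>']
    previous = "start"
    for task in model["tasks"]:
        lines.append(f'  <task-node name="{task}"/>  <!-- reached from {previous} -->')
        previous = task
    lines.append('  <end-state name="end"/>')
    lines.append('</process>')
    return "\n".join(lines)

print(to_workflow_xml(process))
```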
Abstract:
The adoption of the software product line (SPL) approach brings several benefits when compared with conventional development processes based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction, since it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The testing activity is also fundamental and aims to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the testing process of product lines, but they have proved limited and provide only general guidelines. In addition, there is a lack of tools to support the variability management and customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration, and system levels in domain and application engineering; and (iii) tool support for automating the variability management and customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
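One way to picture the reuse and customization of test cases across derived products is a variability-aware test suite in which variable tests run only when the corresponding feature is selected. The sketch below uses Python's unittest; the product configuration and feature names are hypothetical, not the evaluated SPL:

```python
import unittest

# Illustrative product configuration: optional features included in this product.
PRODUCT_FEATURES = {"shopping_cart"}     # e.g. "wishlist" is not selected

def requires_feature(name):
    """Variable test assets run only when the feature is part of the product."""
    return unittest.skipUnless(name in PRODUCT_FEATURES, f"feature {name} not selected")

class CatalogTests(unittest.TestCase):
    def test_common_listing(self):          # common (domain-level) test asset
        self.assertEqual(sorted(["b", "a"]), ["a", "b"])

    @requires_feature("shopping_cart")
    def test_add_to_cart(self):             # variable test asset
        cart = []
        cart.append("item-1")
        self.assertIn("item-1", cart)

    @requires_feature("wishlist")
    def test_wishlist(self):                # skipped for this product
        self.fail("would only run in products with the wishlist feature")

if __name__ == "__main__":
    unittest.main()
```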
Abstract:
Software product lines (SPL) are a software engineering approach to developing families of software systems that share common features and differ in other features according to the requested products. Adopting the SPL approach can bring several benefits, such as cost reduction, product quality, productivity, and shorter time to market. On the other hand, the SPL approach brings new challenges to software evolution that must be considered. Recent research has explored and proposed automated approaches based on code analysis and traceability techniques for change impact analysis in the context of SPL development. These approaches have limitations, such as the customization of the analysis functionalities to address different strategies for change impact analysis, and the change impact analysis of fine-grained variability. This dissertation proposes a change impact analysis tool for SPL development, called Squid Impact Analyzer. The tool supports change impact analysis based on information from variability modeling, the mapping of variabilities to code assets, and the existing dependency relationships between code assets. The tool is assessed through an experiment that compares its change impact analysis results with the real changes applied in several evolution releases of an SPL for media management on mobile devices.
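A minimal sketch of the kind of dependency-based impact analysis described above: given a changed code asset, follow reverse dependency edges and report the transitively impacted assets together with the features they realize. The asset and feature names are assumptions for illustration:

```python
from collections import deque

# Toy traceability data: asset dependencies and the feature each asset realizes.
DEPENDS_ON = {                    # edges point from dependent to dependency
    "PhotoViewer.java": {"MediaStore.java"},
    "VideoPlayer.java": {"MediaStore.java", "Codec.java"},
    "AlbumScreen.java": {"PhotoViewer.java"},
}
ASSET_TO_FEATURE = {
    "MediaStore.java": "core", "Codec.java": "video",
    "PhotoViewer.java": "photo", "VideoPlayer.java": "video",
    "AlbumScreen.java": "photo",
}

def impacted(changed_asset: str) -> dict:
    """Assets (and their features) transitively impacted by a change,
    found by traversing reverse dependency edges."""
    reverse = {}
    for src, deps in DEPENDS_ON.items():
        for dep in deps:
            reverse.setdefault(dep, set()).add(src)
    seen, queue = set(), deque([changed_asset])
    while queue:
        asset = queue.popleft()
        for dependent in reverse.get(asset, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return {asset: ASSET_TO_FEATURE[asset] for asset in seen}

print(impacted("MediaStore.java"))
```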
Abstract:
Software product line engineering promotes large-scale software reuse by developing a system family that shares a set of core features and by enabling the selection and customization of a set of variabilities that distinguish each product of the family from the others. In order to reduce time to market, the software industry has been using the clone-and-own technique to create and manage new software products or product lines. Despite its advantages, the clone-and-own approach brings several difficulties to the evolution and reconciliation of software product lines, especially because of the code conflicts generated by the simultaneous evolution of the original software product line, called Source, and its cloned products, called Target. This thesis proposes an approach to evolve and reconcile cloned products based on mining software repositories and code conflict analysis techniques. The approach supports the identification of different kinds of code conflicts (lexical, structural, and semantic) that can occur during the integration of development tasks (bug fixes, enhancements, and new use cases) from the original, evolved software product line into the cloned product line. We have also conducted an empirical study characterizing the code conflicts produced during the evolution and merging of two large-scale web information system product lines. The results of the study demonstrate the approach's potential to automatically or semi-automatically resolve several existing code conflicts, thus helping to reduce the complexity and cost of reconciling cloned software product lines.
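As an illustration of the simplest conflict category, a lexical (textual) conflict arises when Source and Target both touch the same base lines. A small Python sketch using difflib, with illustrative code snippets standing in for the Source and Target versions:

```python
import difflib

def changed_lines(base: list, version: list) -> set:
    """Indices of base lines that a version modifies or deletes."""
    changed = set()
    for op, i1, i2, _, _ in difflib.SequenceMatcher(None, base, version).get_opcodes():
        if op != "equal":
            changed.update(range(i1, i2))
    return changed

def lexical_conflicts(base, source, target):
    """Base lines touched by both the Source SPL and the cloned Target:
    a rough indicator of a lexical (textual) merge conflict."""
    return sorted(changed_lines(base, source) & changed_lines(base, target))

base   = ["def pay(amount):", "    tax = 0.1", "    return amount * (1 + tax)"]
source = ["def pay(amount):", "    tax = 0.15", "    return amount * (1 + tax)"]
target = ["def pay(amount):", "    tax = 0.2", "    return amount * (1 + tax)"]
print(lexical_conflicts(base, source, target))   # -> [1]: both edited the tax line
```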
Abstract:
Software product line engineering brings advantages over traditional software development regarding the mass customization of system components. However, there are scenarios in which maintaining separate clones of a software system seems to be an easier and more flexible approach than managing the variabilities of a software product line. This dissertation qualitatively evaluates an approach that aims to support the reconciliation of functionalities between cloned systems. The analyzed approach is based on mining data about the issues and source code of evolved cloned web systems. The next step is to process the merge conflicts collected by the approach, including those not indicated by traditional version control systems, in order to identify potential integration problems between the cloned software systems. The results of the study show the feasibility of the approach for performing a systematic characterization and analysis of merge conflicts in large-scale web-based systems.
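Conflicts "not indicated by traditional version control systems" are typically non-textual: both clones edit the same function on different lines, so the merge succeeds silently. Purely as an illustration of one such indicator (not the thesis's technique), a Python sketch that maps changed line numbers to the functions containing them:

```python
import ast

def functions_touching(source: str, changed_line_numbers: set) -> set:
    """Names of functions whose bodies contain any of the changed lines."""
    touched = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if any(node.lineno <= n <= node.end_lineno for n in changed_line_numbers):
                touched.add(node.name)
    return touched

BASE = """def price(x):
    tax = 0.1
    return x * (1 + tax)
"""
# Hypothetical change sets from the two cloned systems, as base line numbers:
# the clones touch different lines, so a textual merge succeeds silently,
# but both edits land in the same function, a potential integration problem.
source_changes = {2}     # changed the tax rate
target_changes = {3}     # changed the returned expression
conflict = functions_touching(BASE, source_changes) & functions_touching(BASE, target_changes)
print(conflict)          # -> {'price'}
```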
Abstract:
The uncontrolled disposal of wastewater containing phenolic compounds by industry has caused irreversible damage to the environment. Because of this, it is now mandatory to develop new methods to treat these effluents before they are discharged. One of the most promising and low-cost approaches is the degradation of phenolic compounds via photocatalysis. The main goals of this work are the customization of a bench-scale photoreactor and the preparation of catalysts from the char originated by the fast pyrolysis of sewage sludge. The experiments were carried out at constant temperature (50 °C) under oxygen flow (410, 515, 650, and 750 mL min-1). The reaction took place in the liquid phase (3.4 L), with a catalyst concentration of 1 g L-1, an initial phenol concentration of 500 mg L-1, and a reaction time of 3 hours. A 400 W lamp was fitted to the reactor. The oxygen flow was optimized at 650 mL min-1. The pH of the liquid and the nature of the catalyst (acidified and calcined palygorskite, palygorskite impregnated with 3.8% Fe, and the pyrolysis char) were investigated. The catalytic materials were characterized by XRD, XRF, and BET. In the photocatalytic degradation of phenol, the results showed that pH has a significant influence on phenol conversion, with the best results at pH 5.5. The phenol conversion ranged from 51.78% for the sewage sludge char to 58.02% for the acidified and calcined palygorskite. Liquid samples were analyzed by liquid chromatography, and the following compounds were identified: hydroquinone, catechol, and maleic acid. A reaction mechanism was proposed in which phenol is transformed in the homogeneous phase and the other compounds react on the catalyst surface. For the latter, the Langmuir-Hinshelwood model was applied; the mass balances led to a system of differential equations that were solved numerically in order to estimate the kinetic and adsorption parameters. The model fitted the experimental results satisfactorily. From the proposed mechanism and the operating conditions used in this study, the most favored step, regardless of the catalyst, was the transformation of the acid group (originating from quinone compounds) into CO2 and water, whose rate constant k4 was 0.578 mol L-1 min-1 for the acidified and calcined palygorskite, 0.472 mol L-1 min-1 for Fe2O3/palygorskite, and 1.276 mol L-1 min-1 for the sewage sludge char, the latter being the best catalyst for the mineralization of the acid to CO2 and water. The quinones were adsorbed on the acidic sites of the calcined palygorskite and Fe2O3/palygorskite, whose adsorption constants were similar (~4.45 L mol-1) and higher than that of the sewage sludge char (3.77 L mol-1).
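A simplified numerical sketch of the kind of kinetic model described above: a series scheme phenol -> intermediates -> acids -> CO2, with Langmuir-Hinshelwood rate expressions for the surface steps, integrated with SciPy. The scheme and constants below are illustrative assumptions, not the fitted parameters reported in the work:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k4 = 0.02, 0.01, 0.5      # rate constants (illustrative values)
K_ads = 4.0                        # adsorption equilibrium constant, L mol-1

def rates(t, y):
    phenol, inter, acid = y
    r1 = k1 * phenol                               # homogeneous-phase step
    r2 = k2 * K_ads * inter / (1 + K_ads * inter)  # L-H surface step
    r4 = k4 * K_ads * acid / (1 + K_ads * acid)    # mineralization to CO2 + H2O
    return [-r1, r1 - r2, r2 - r4]

y0 = [500 / 94e3, 0.0, 0.0]        # initial phenol ~500 mg L-1 expressed in mol L-1
sol = solve_ivp(rates, (0, 180), y0, t_eval=np.linspace(0, 180, 7))
for t, c in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f} min, phenol = {c * 94e3:6.1f} mg L-1")
```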