9 results for Capability Maturity Model for Software
at Universidad de Alicante
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying platform's technical details. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time-series analysis is required.
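To make the model-driven idea concrete, the following minimal Python sketch shows how a platform-independent mining model might be transformed into both a data-warehouse artifact and a platform-specific analysis stub. All names here (MiningModel, to_warehouse_ddl, to_analysis_script) are hypothetical illustrations, not the framework actually proposed in the paper.

# Minimal sketch of the model-transformation idea: a platform-independent
# mining model is "compiled" into platform-specific artifacts. All names
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class MiningModel:              # conceptual (platform-independent) model
    source_table: str           # data under analysis
    target_attribute: str       # attribute to analyse
    technique: str              # e.g. "time-series"

def to_warehouse_ddl(m: MiningModel) -> str:
    """Model-to-text transformation: generate the data-warehouse side."""
    return f"CREATE TABLE {m.source_table}_mart AS SELECT * FROM {m.source_table};"

def to_analysis_script(m: MiningModel) -> str:
    """Model-to-text transformation: generate a platform-specific analysis stub."""
    return (f"# auto-generated for technique={m.technique}\n"
            f"load('{m.source_table}_mart'); analyse('{m.target_attribute}')")

model = MiningModel("sales", "monthly_revenue", "time-series")
print(to_warehouse_ddl(model))
print(to_analysis_script(model))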
Abstract:
Hardware/Software partitioning (HSP) is a key task in embedded system co-design. Its main goal is to decide which components of an application are to be executed on a general-purpose processor (software) and which ones on specific hardware, taking into account a set of restrictions expressed as metrics. In recent years, several approaches driven by metaheuristic algorithms have been proposed for solving the HSP problem. However, due to the diversity of models and metrics used, choosing the best-suited algorithm is still an open problem. This article presents the results of applying a fuzzy approach to the HSP problem. This approach is more flexible than many others because it can accept reasonably good solutions and reject those that do not seem good. In this work we compare six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Hill Climbing, Genetic Algorithm and Evolutionary Strategy. The presented model aims to simultaneously minimize the hardware area and the execution time. The obtained results show that Restart Hill Climbing is the best-performing algorithm in most cases.
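As an illustration of the kind of search these metaheuristics perform, here is a minimal restart hill climber for a toy HSP instance in Python. The component costs, the equal-weight objective and all identifiers are invented for illustration; they are not the paper's actual model or metrics.

# Each component is assigned to software (0) or hardware (1); a restart
# hill climber minimises a weighted sum of hardware area and execution time.
import random

AREA = [4, 7, 2, 9, 5]    # hypothetical hardware area per component
SW_T = [9, 6, 8, 3, 7]    # hypothetical software execution time
HW_T = [2, 1, 3, 1, 2]    # hypothetical hardware execution time

def cost(part):
    area = sum(a for a, p in zip(AREA, part) if p == 1)
    time = sum(h if p == 1 else s for s, h, p in zip(SW_T, HW_T, part))
    return area + time     # equal weights, purely illustrative

def hill_climb(n, restarts=20, steps=200):
    best = None
    for _ in range(restarts):                      # restart from random points
        part = [random.randint(0, 1) for _ in range(n)]
        for _ in range(steps):
            i = random.randrange(n)                # flip one assignment
            neigh = part[:]
            neigh[i] ^= 1
            if cost(neigh) <= cost(part):          # keep non-worsening moves
                part = neigh
        if best is None or cost(part) < cost(best):
            best = part
    return best

best = hill_climb(len(AREA))
print(best, cost(best))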
Empirical study on the maintainability of Web applications: Model-driven Engineering vs Code-centric
Abstract:
Model-driven Engineering (MDE) approaches are often acknowledged to improve the maintainability of the resulting applications. However, there is a scarcity of empirical evidence that backs their claimed benefits and limitations with respect to code-centric approaches. The purpose of this paper is to compare the performance and satisfaction of junior software maintainers while executing maintainability tasks on Web applications with two different development approaches, one being OOH4RIA, a model-driven approach, and the other being a code-centric approach based on Visual Studio .NET and the Agile Unified Process. We have conducted a quasi-experiment with 27 graduate students from the University of Alicante. They were randomly divided into two groups, and each group was assigned to a different Web application on which they performed a set of maintainability tasks. The results show that maintaining Web applications with OOH4RIA clearly improves the performance of subjects. It also tips the satisfaction balance in favor of OOH4RIA, although not significantly. Model-driven development methods seem to improve both the developers’ objective performance and subjective opinions on ease of use of the method. This notwithstanding, further experimentation is needed to be able to generalize the results to different populations, methods, languages and tools, different domains and different application sizes.
Abstract:
Hardware/software partitioning is a fundamental task in embedded system co-design. It decides, taking the design metrics into account, which components will be executed on a general-purpose processor (software) and which on specific hardware. In recent years, several solutions to the partitioning problem driven by metaheuristic algorithms have been proposed. However, due to the diversity of models and metrics used, the choice of the most appropriate algorithm remains an open problem. This work presents a comparison of six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Stochastic Hill Climbing, Genetic Algorithm and Evolution Strategy. The model used in the comparison aims to minimize the occupied area and the execution time; the model's constraints are treated as penalties so that the search space also includes other solutions. The results show that Stochastic Hill Climbing and Evolution Strategy obtain the best results overall, followed by the Genetic Algorithm.
Abstract:
Context: Today’s project managers have a myriad of methods to choose from for the development of software applications. However, they lack empirical data about the character of these methods in terms of usefulness, ease of use or compatibility, all of these being relevant variables to assess the developer’s intention to use them. Objective: To compare three methods, each following a different paradigm (Model-Driven, Model-Based and Code-Centric) with respect to their adoption potential by junior software developers engaged in the development of the business layer of a Web 2.0 application. Method: We have conducted a quasi-experiment with 26 graduate students of the University of Alicante. The application developed was a Social Network, which was organized around a fixed set of modules. Three of them, similar in complexity, were used for the experiment. Subjects were asked to use a different method for each module, and then to answer a questionnaire that gathered their perceptions during such use. Results: The results show that the Model-Driven method is regarded as the most useful, although it is also considered the least compatible with previous developers’ experiences. They also show that junior software developers feel comfortable with the use of models, and that they are likely to use them if the models are accompanied by a Model-Driven development environment. Conclusions: Despite their relatively low level of compatibility, Model-Driven development methods seem to show a great potential for adoption. That said, however, further experimentation is needed to make it possible to generalize the results to a different population, different methods, other languages and tools, different domains or different application sizes.
Abstract:
A single, very easy-to-use MATLAB graphical user interface (GUI), based on the topological information contained in the Gibbs energy of mixing function, has been developed as a friendly tool to check the coherence of the NRTL parameters obtained in a data-correlation procedure. Thus, analysis of the GM/RT surface, the GM/RT curves for the binaries, and the GM/RT in the planes containing the tie lines is necessary to validate the parameters obtained for the different models used to correlate phase equilibrium data.
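For reference, the quantity the tool inspects can be written, for a binary system under the standard NRTL model (the textbook form of the equations, stated here as an assumption rather than taken from the paper), as:

\[
\frac{G_M}{RT} = \underbrace{x_1 \ln x_1 + x_2 \ln x_2}_{\text{ideal mixing}}
+ x_1 x_2 \left( \frac{\tau_{21} G_{21}}{x_1 + x_2 G_{21}}
               + \frac{\tau_{12} G_{12}}{x_2 + x_1 G_{12}} \right),
\qquad G_{ij} = \exp\!\left(-\alpha\,\tau_{ij}\right),
\]

where \(\tau_{12}\), \(\tau_{21}\) and the non-randomness factor \(\alpha\) are the fitted NRTL parameters whose coherence (e.g., the curvature and miscibility behaviour of the \(G_M/RT\) surface) the GUI is designed to check.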
Abstract:
In order to build dynamic models for the prediction and management of degraded Mediterranean forest areas, it was necessary to build the MARIOLA model, a computer calculation program. This model includes the following subprograms: 1) bioshrub, which calculates total, green and woody shrub biomass and establishes the time differences needed to calculate growth; 2) selego, which builds the flow equations from the experimental data, based on advanced multiple-regression statistical procedures; 3) VEGETATION, which solves the state equations with Euler or Runge-Kutta integration methods. Each of these subprograms can run independently or linked to the others.
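The following Python sketch illustrates the two integration schemes the VEGETATION subprogram is said to use. The state equation here (logistic shrub-biomass growth) and all parameter values are invented for illustration; MARIOLA's real flow equations come from the selego regression step.

def f(b):                                  # db/dt: logistic growth, made-up rates
    r, k = 0.35, 100.0                     # intrinsic rate, carrying capacity
    return r * b * (1.0 - b / k)

def euler_step(b, h):                      # first-order Euler method
    return b + h * f(b)

def rk4_step(b, h):                        # classical fourth-order Runge-Kutta
    k1 = f(b)
    k2 = f(b + 0.5 * h * k1)
    k3 = f(b + 0.5 * h * k2)
    k4 = f(b + h * k3)
    return b + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.1                                    # integration step
b_euler = b_rk4 = 5.0                      # initial biomass (arbitrary units)
for _ in range(100):                       # integrate 100 steps (10 time units)
    b_euler = euler_step(b_euler, h)
    b_rk4 = rk4_step(b_rk4, h)
print(round(b_euler, 2), round(b_rk4, 2))  # RK4 tracks the exact solution more closely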
Abstract:
Purpose: The P23H rhodopsin mutation is an autosomal dominant cause of retinitis pigmentosa (RP). The degeneration can be tracked using different anatomical and functional methods. In our case, we evaluated the anatomical changes using Spectral-Domain Optical Coherence Tomography (SD-OCT) and correlated the findings with retinal thickness values determined by immunocytochemistry.
Methods: Pigmented rats heterozygous for the P23H mutation, aged between P18 and P180, were studied. Function was assessed by means of optomotor testing and ERGs. Retinal thickness measurements, autofluorescence and fluorescein angiography were performed using Spectralis OCT. Retinas were studied by means of immunohistochemistry.
Results: Between P30 and P180, visual acuity decreased from 0.500 to 0.182 cycles per degree (cyc/deg) and contrast sensitivity decreased from 54.56 to 2.98 for a spatial frequency of 0.089 cyc/deg. Only cone-driven b-wave responses reached developmental maturity. Flicker fusion frequencies were also comparable at P29 (42 Hz). Double-flash-isolated rod-driven responses were already affected at P29. Photopic responses revealed deterioration after P29. A reduction in retinal thickness and morphological modifications were seen in OCT sections. Statistically significant differences were found in all evaluated thicknesses. Autofluorescence was seen in P23H rats as sparse dots. Immunocytochemistry showed a progressive decrease in the outer nuclear layer (ONL), and morphological changes. Although anatomical thickness measures were significantly lower than OCT values, there was a very strong correlation between the values measured by both techniques.
Conclusions: In pigmented P23H rats, a progressive deterioration occurs in both retinal function and anatomy. Anatomical changes can be effectively evaluated using SD-OCT and immunocytochemistry, with a good correlation between their values, thus making SD-OCT an important tool for research in retinal degeneration.