925 results for CNPQ::CIENCIAS EXATAS E DA TERRA::PROBABILIDADE E ESTATISTICA::ESTATISTICA


Relevance:

100.00%

Publisher:

Abstract:

This work proposes a model-based approach for pointcut management in the presence of evolution in aspect-oriented systems. The proposed approach, called conceptual-view-based pointcuts, is motivated by the shortcomings of traditional pointcut definition approaches, which generally refer directly to the software structure and/or behavior, thereby creating a strong coupling between the pointcut definitions and the base code. This coupling causes what is known as the pointcut fragility problem and hinders the evolution of aspect-oriented systems: whenever the software changes or evolves, all the pointcuts of every aspect must be reviewed to ensure that they remain valid after the change. Our approach focuses on defining pointcuts over a conceptual model, which describes the system's structure at a more abstract level. The conceptual model consists of classifications (called conceptual views) of the business model elements, based on common characteristics, and of relationships between these views. Pointcut definitions are thus created against the conceptual model rather than directly referencing the base model. Moreover, the conceptual model contains a set of relationships that makes it possible to verify automatically whether the classifications in the conceptual model remain valid after a software change. Development with the conceptual-view-based pointcuts approach is supported by a conceptual framework called CrossMDA2 and by an MDA-based development process, both also proposed in this work. As a proof of concept, we present two versions of a case study, setting up an evolution scenario that shows how conceptual-view-based pointcuts help to detect and minimize pointcut fragility. The proposal is evaluated with the Goal/Question/Metric (GQM) technique together with metrics for analyzing the efficiency of pointcut definition.
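
To make the idea concrete, the sketch below is a minimal Python illustration of a conceptual view classifying business-model elements and a pointcut that references the view instead of the base code; the view names, stereotypes, and model elements are hypothetical, and this is not the CrossMDA2 implementation.

    # Minimal sketch of conceptual-view-based pointcuts (hypothetical names,
    # not the CrossMDA2 API). A view classifies model elements by a predicate;
    # a pointcut references the view instead of concrete class/method names.

    class ModelElement:
        def __init__(self, name, stereotypes):
            self.name = name
            self.stereotypes = set(stereotypes)

    # Conceptual views: name -> classification predicate over model elements.
    views = {
        "PersistenceOperation": lambda e: "entity" in e.stereotypes,
        "RemoteService":        lambda e: "remote" in e.stereotypes,
    }

    # Pointcuts refer to views, not to the base code structure.
    pointcuts = {"LoggingAspect": ["PersistenceOperation"]}

    def resolve(view_name, model):
        """Join points selected by a view in the current business model."""
        return [e.name for e in model if views[view_name](e)]

    def check_after_change(model):
        """Re-validate classifications after the model evolves: warn when a
        view used by some aspect no longer selects any element."""
        for aspect, used in pointcuts.items():
            for v in used:
                if not resolve(v, model):
                    print(f"warning: view '{v}' used by {aspect} selects nothing")

    model_v1 = [ModelElement("Customer.save", {"entity"})]
    model_v2 = [ModelElement("Customer.persist", {"entity"})]  # method renamed
    print(resolve("PersistenceOperation", model_v1))  # ['Customer.save']
    print(resolve("PersistenceOperation", model_v2))  # still matched after the change
    check_after_change(model_v2)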

Relevance:

100.00%

Publisher:

Abstract:

Research on Wireless Sensor Networks (WSNs) has evolved, with potential applications in several domains. However, building WSN applications is hampered by the need to program against the low-level abstractions provided by sensor operating systems and by the need for specific knowledge about each application domain and each sensor platform. We propose an MDA approach to develop WSN applications. This approach allows domain experts to contribute directly to application development without low-level knowledge of WSN platforms and, at the same time, allows network experts to program WSN nodes to meet application requirements without specific knowledge of the application domain. Our approach also promotes the reuse of the developed software artifacts, allowing an application model to be reused across different sensor platforms and a platform model to be reused for different applications.
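
As a rough illustration of the reuse the approach aims at, the hypothetical sketch below combines a platform-independent application model with two platform models to generate node pseudo-code; the model contents and the generated calls are invented for illustration and do not reflect the approach's actual metamodels or transformations.

    # Hypothetical sketch of weaving an application model (what to sense and
    # when) with a platform model (how a platform reads and sends data) to
    # generate node pseudo-code. Not the approach's actual metamodels.

    app_model = {"sense": "temperature", "period_s": 30, "send_to": "sink"}

    platform_models = {
        "TinyOS":  {"read": "read_temperature_tinyos()",  "send": "send_to_sink_tinyos(value)"},
        "Contiki": {"read": "read_temperature_contiki()", "send": "send_to_sink_contiki(value)"},
    }

    def generate(app, platform):
        p = platform_models[platform]
        return (f"// generated for {platform}\n"
                f"every {app['period_s']} s:\n"
                f"    value = {p['read']}  // sense {app['sense']}\n"
                f"    {p['send']}          // forward to {app['send_to']}\n")

    # The same application model is reused for two different platforms.
    print(generate(app_model, "TinyOS"))
    print(generate(app_model, "Contiki"))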

Relevance:

100.00%

Publisher:

Abstract:

The Algorithms and Programming course proves to be a real obstacle for many students in computing programs. Students who are not familiar with the new ways of thinking the course requires, and who lack certain skills it demands, encounter difficulties that sometimes result in failing the course or dropping out. Faced with this problem, a survey of the difficulties experienced by students was conducted as a way to understand the problem and to guide solutions that try to solve or mitigate those difficulties. This work describes a classroom methodology based on David Ausubel's theory of Meaningful Learning. In addition to this theory, a tool developed at UFRN, named Takkou, was used with the intent of better motivating students in algorithms classes and of exercising logical reasoning. Finally, a comparative evaluation of the suggested methodology and the traditional methodology was carried out, and the results are discussed.

Relevance:

100.00%

Publisher:

Abstract:

The Multiobjective Spanning Tree problem is an NP-hard combinatorial optimization problem that arises in several areas, especially network design. In this work, we propose a solution to the biobjective version of the problem through a transgenetic algorithm named ATIS-NP. Computational Transgenetics is a metaheuristic technique from evolutionary computation whose inspiration lies in viewing cooperation (rather than competition) as the main driving force of evolution. The algorithm outlined here is the evolution of work that has already yielded two other transgenetic algorithms, which are also presented. This research also comprises an experimental analysis aimed at assessing the performance of ATIS-NP against other approaches. ATIS-NP is compared to the algorithms previously implemented and to other transgenetic algorithms already proposed for the problem under consideration. The computational experiments also include a comparison with two recent approaches from the literature that present good results, a GRASP and a genetic algorithm. The efficiency of the method is evaluated based on metrics of solution quality and computational time. Since the problem lies within the context of multiobjective optimization, quality indicators are adopted as the criteria of solution quality. Statistical tests evaluate the significance of the results obtained from the computational experiments.
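
The comparison of solutions under two objectives rests on Pareto dominance; the short sketch below illustrates, with made-up objective values (e.g., total cost and total delay, both minimized), how dominated spanning trees are filtered out before quality indicators are computed.

    # Sketch of Pareto dominance for a bi-objective spanning tree problem.
    # Each solution is represented only by its two objective values
    # (hypothetical: total cost and total delay); both are minimized.

    def dominates(a, b):
        """True if a is at least as good as b in both objectives and
        strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def nondominated(solutions):
        """Extract the Pareto front from a list of objective vectors."""
        return [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t != s)]

    trees = [(10, 7), (8, 9), (12, 5), (9, 9), (8, 12)]
    print(nondominated(trees))  # [(10, 7), (8, 9), (12, 5)]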

Relevance:

100.00%

Publisher:

Abstract:

Many challenges have been imposed on middleware that supports applications for digital TV because of the heterogeneity and resource constraints of execution platforms. In this scenario, the middleware must be highly configurable so that it can be customized to meet the requirements of applications and underlying platforms. This work presents GingaForAll, a software product line developed for Ginga, the middleware of the Brazilian Digital TV System (SBTVD). GingaForAll combines the concepts of software product lines, aspect orientation, and model-driven development to allow: (i) the specification of the common and variable features of the middleware; (ii) the modularization of crosscutting concerns, both mandatory and variable, through aspects; and (iii) the expression of concepts as a set of models that raise the level of abstraction and enable the management of various software artifacts in terms of configurable models. This work presents the software product line architecture, an architecture that supports automatic customization of the middleware, and a tool that implements the GingaForAll product generation process.
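
A small sketch of the product-line configuration step is given below; the feature names and constraints are hypothetical and do not correspond to GingaForAll's actual feature model.

    # Sketch of configuring a product from a software product line.
    # Feature names are hypothetical, not GingaForAll's actual feature model.

    feature_model = {
        "mandatory": {"tuner", "media-player"},
        "optional":  {"lua-engine", "java-engine", "return-channel"},
        "requires":  {"return-channel": {"java-engine"}},  # cross-tree constraint
    }

    def valid_product(selected):
        """Check a product configuration against the feature model."""
        if not feature_model["mandatory"] <= selected:
            return False
        known = feature_model["mandatory"] | feature_model["optional"]
        if not selected <= known:
            return False
        return all(deps <= selected
                   for f, deps in feature_model["requires"].items() if f in selected)

    print(valid_product({"tuner", "media-player", "lua-engine"}))       # True
    print(valid_product({"tuner", "media-player", "return-channel"}))   # False: missing java-engine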

Relevance:

100.00%

Publisher:

Abstract:

This work presents a contribution to the international Verified Software Repository effort through the formal specification of the microkernel of the FreeRTOS real-time operating system. The specification was written at an abstract level using the B method. Properties of the microkernel were selected as specification requirements, and the specification was built around the microkernel functions that implement these properties. This work is intended to encourage the formal verification of FreeRTOS and also to contribute to the formal development of real-time microkernels based on FreeRTOS. Furthermore, the model provides formal documentation of the microkernel, showing its features and how its internal state changes. Finally, this work can serve as an example of specifying a real system with the B method.
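
As an informal illustration of the kind of property such a specification captures, the sketch below models, in Python rather than B, the FreeRTOS scheduling rule that the running task always has the highest priority among the ready tasks; the state variables and operations are simplified assumptions, not the actual B machine.

    # Python analogy of an invariant one might state over the scheduler state:
    # the running task always has the highest priority among the ready tasks
    # (the FreeRTOS scheduling rule). Simplified illustration, not the B model.

    class SchedulerState:
        def __init__(self):
            self.ready = {}          # task name -> priority
            self.running = None

        def invariant(self):
            if self.running is None:
                return not self.ready
            return self.ready[self.running] == max(self.ready.values())

        def add_task(self, name, priority):
            self.ready[name] = priority
            self._reschedule()

        def _reschedule(self):
            self.running = max(self.ready, key=self.ready.get) if self.ready else None

    s = SchedulerState()
    s.add_task("idle", 0)
    s.add_task("sensor", 2)
    assert s.invariant() and s.running == "sensor"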

Relevance:

100.00%

Publisher:

Abstract:

In recent years, several middleware platforms for Wireless Sensor Networks (WSNs) have been proposed. Most of these platforms do not consider how to integrate components into a generic middleware architecture. Many requirements must be considered when designing middleware for WSNs, among them the possibility of modifying the middleware source code without changing its external behavior. It is therefore desirable to have a generic middleware architecture able to offer a suitable configuration according to the requirements of the application. Adopting component-based middleware is a promising approach because it allows better abstraction, low coupling, modularization, and management of the features built into the middleware. Another problem with current middleware is interoperability with networks external to the sensor network, such as the Web. Most current middleware lacks the functionality to expose WSN data on the World Wide Web as Web resources that can be accessed through protocols already adopted on the Web. Thus, this work presents Midgard, a component-based middleware specifically designed for WSNs, which adopts the microkernel and REST architectural patterns. The microkernel pattern complements the component model, since the microkernel can be understood as a component that encapsulates the core of the system and is responsible for initializing the core services only when needed, as well as removing them when they are no longer needed. REST, in turn, defines a standardized way for different applications to communicate based on standards adopted on the Web and makes it possible to treat WSN data as Web resources, allowing them to be accessed through protocols already adopted on the World Wide Web. The main goals of Midgard are: (i) to provide easy Web access to data generated by WSNs, exposing such data as Web resources, following the principles of the Web of Things paradigm; and (ii) to provide the WSN application developer with the ability to instantiate only the specific services required by the application, thus generating a customized middleware and saving node resources. Midgard allows the WSN to be used as a set of Web resources while providing a cohesive and loosely coupled software architecture, addressing interoperability and customization. In addition, Midgard provides two services needed by most WSN applications: (i) configuration and (ii) inspection and adaptation. New services can be implemented and easily incorporated into the middleware thanks to its flexible and extensible architecture. According to our assessment, Midgard provides interoperability between the WSN and external networks, such as the Web, as well as between different applications within a single WSN. In addition, we assessed memory consumption, application image size, the size of messages exchanged in the network, response time, overhead, and scalability. During the evaluation, Midgard proved to satisfy its goals and showed itself to be scalable without prohibitive resource consumption.
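
The Web of Things idea of exposing sensor readings as Web resources can be illustrated with a tiny HTTP handler; the resource path and JSON payload below are hypothetical, and the sketch uses Python's standard library rather than Midgard's actual implementation.

    # Sketch of exposing sensor readings as a Web resource, in the spirit of
    # a REST interface over WSN data (hypothetical path and payload;
    # standard-library HTTP server, not the actual middleware).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    readings = {"node-1": {"temperature": 23.5}}   # stand-in for WSN data

    class SensorResource(BaseHTTPRequestHandler):
        def do_GET(self):
            node = self.path.strip("/").split("/")[-1]   # e.g. GET /sensors/node-1
            if node in readings:
                body = json.dumps(readings[node]).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), SensorResource).serve_forever()  # blocks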

Relevance:

100.00%

Publisher:

Abstract:

Nowadays several electronic devices support digital video; examples include cell phones, digital cameras, video cameras, and digital televisions. However, raw video, represented as captured, amounts to a huge volume of data, millions of bits. Storing it in its primary form would require a huge amount of disk space, and transmitting it would require a huge bandwidth. Video compression is therefore essential to make storage and transmission of this information feasible. Motion estimation is a technique used in video coders that exploits the temporal redundancy present in video sequences to reduce the amount of data necessary to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution video according to the H.264/AVC standard. H.264/AVC is the most advanced video coding standard, with several new features that allow it to achieve high compression rates. The architecture presented in this work was developed to provide a high degree of data reuse; the data reuse scheme adopted reduces the bandwidth required to execute motion estimation. Motion estimation is the task responsible for the largest share of the gains obtained with the H.264/AVC standard, so this module is essential for the performance of the final video coder. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
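
The core operation of block-matching motion estimation is the sum of absolute differences (SAD) between a block of the current frame and candidate blocks in the reference frame; the sketch below is a plain full-search software illustration of that operation, not the hardware architecture or data reuse scheme described here.

    # Software sketch of full-search block matching with the sum of absolute
    # differences (SAD) criterion. The dissertation's contribution is a
    # hardware architecture with data reuse, which this simple loop omits.
    import numpy as np

    def sad(block_a, block_b):
        return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

    def motion_vector(current, reference, y, x, block=16, search=8):
        """Best (dy, dx) displacement for the block at (y, x), found by
        exhaustive search within a +/- `search` pixel window."""
        cur = current[y:y + block, x:x + block]
        best_cost, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                ry, rx = y + dy, x + dx
                if (0 <= ry and 0 <= rx
                        and ry + block <= reference.shape[0]
                        and rx + block <= reference.shape[1]):
                    cost = sad(cur, reference[ry:ry + block, rx:rx + block])
                    if best_cost is None or cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
        return best_mv, best_cost

    frame0 = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))    # simulate motion
    print(motion_vector(frame1, frame0, 16, 16))            # typically ((-2, -3), 0)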

Relevance:

100.00%

Publisher:

Abstract:

Using formal methods, developers can increase the trustworthiness and correctness of software and concentrate on its functional requirements. However, there is much resistance to adopting this software development approach, mainly because of the scarcity of adequate, easy-to-use, and useful tools. Developers typically write code and test it; these tests usually consist of executing the program and checking its output against its requirements, which is not always an exhaustive discipline. Using formal methods, on the other hand, one can investigate the system's properties further. Unfortunately, specification languages do not always have tools like animators or simulators, and sometimes there are no friendly graphical user interfaces. On the other hand, specification languages usually have a compiler that normally generates a Labeled Transition System (LTS). This work proposes an application that provides graphical animation for formal specifications using the LTS as input. The application initially supports the languages B, CSP, and Z; however, given an LTS in a specified XML format, further languages can be animated. Additionally, the tool provides trace visualization, showing the choices the user has made, in a graphical tree. The intention is to improve the comprehension of a specification by providing information about errors and by animating it, as developers do for programming languages such as Java and C++.
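
A small sketch of the animation idea is given below: an LTS is loaded from XML and stepped through according to the user's choices; the XML element and attribute names are hypothetical and are not necessarily the tool's actual input schema.

    # Sketch of loading an LTS from XML and stepping through it interactively.
    # The element/attribute names (<transition from= label= to=>) are a
    # hypothetical schema, not necessarily the tool's actual input format.
    import xml.etree.ElementTree as ET

    LTS_XML = """
    <lts initial="s0">
      <transition from="s0" label="coin"   to="s1"/>
      <transition from="s1" label="coffee" to="s0"/>
      <transition from="s1" label="tea"    to="s0"/>
    </lts>
    """

    root = ET.fromstring(LTS_XML)
    transitions = {}
    for t in root.findall("transition"):
        transitions.setdefault(t.get("from"), []).append((t.get("label"), t.get("to")))

    state, trace = root.get("initial"), []
    for choice in ["coin", "tea"]:                 # choices the user makes
        options = dict(transitions.get(state, []))
        state = options[choice]
        trace.append(choice)
    print(trace, "->", state)                      # ['coin', 'tea'] -> s0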

Relevance:

100.00%

Publisher:

Abstract:

This dissertation aims at extending the JCircus tool, a translator of formal specifications into code that receives a Circus specification as input and translates it into Java code. Circus is a formal language whose syntax is based on Z's and CSP's syntax. The code generated by JCircus uses JCSP, a Java API that implements CSP primitives. As JCSP does not implement all of CSP's primitives, the translation strategy from Circus to Java is not trivial: some CSP primitives, like parallelism, external choice, communication, and multi-synchronization, are only partially implemented. As an additional contribution, this dissertation also develops a tool for testing JCSP programs, called JCSPUnit, which will be included in the new version of JCircus. The extended version of JCircus will be called JCircus 2.0.
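
The channel communication that JCircus maps onto JCSP channels can be illustrated with a thread-and-queue analogy; the Python sketch below only mirrors the idea of a process reading from an input channel and writing to an output channel (P = in?x -> out!f(x) -> P) and is not the Java/JCSP code that JCircus actually generates.

    # Thread-and-queue analogy of CSP channel communication between parallel
    # processes. Illustration only: a Queue is buffered, unlike a synchronous
    # CSP channel, and this is not the code JCircus generates.
    import threading, queue

    inp, out = queue.Queue(), queue.Queue()

    def process():
        while True:
            x = inp.get()            # in?x   (channel input)
            if x is None:
                break
            out.put(x * 2)           # out!f(x)   (channel output)

    worker = threading.Thread(target=process, daemon=True)
    worker.start()
    for v in [1, 2, 3]:
        inp.put(v)
        print(out.get())             # 2, 4, 6
    inp.put(None)                    # terminate the worker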

Relevance:

100.00%

Publisher:

Abstract:

This work presents an ontology to describe the semantics of IMML (Interactive Message Modeling Language), an XML-based user interface description language. The ontology describes all IMML elements, including a natural-language description of each element together with its semantic rules and relationships. The ontology is implemented in OWL-DL, a standard ontology description language recommended by the W3C. Our main goal is to describe the semantics using languages and tools that can be processed by computers. As a consequence, we developed tools to validate a user interface specification and to present the semantic description in different views.
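
The sketch below shows, with rdflib, how an IMML element could be described as an OWL class with a natural-language comment and a relationship; the class and property names and the namespace are hypothetical examples, not the actual IMML ontology.

    # Sketch of describing user-interface elements as OWL classes with rdflib.
    # The class/property names and namespace are hypothetical examples,
    # not the actual IMML ontology.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import OWL, RDF, RDFS

    IMML = Namespace("http://example.org/imml#")   # placeholder namespace
    g = Graph()
    g.bind("imml", IMML)

    g.add((IMML.InteractionElement, RDF.type, OWL.Class))
    g.add((IMML.Command, RDF.type, OWL.Class))
    g.add((IMML.Command, RDFS.subClassOf, IMML.InteractionElement))
    g.add((IMML.Command, RDFS.comment,
           Literal("A user-triggered operation of the user interface.")))
    g.add((IMML.activates, RDF.type, OWL.ObjectProperty))
    g.add((IMML.activates, RDFS.domain, IMML.Command))

    print(g.serialize(format="turtle"))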

Relevance:

100.00%

Publisher:

Abstract:

The main goal of this work is to investigate the suitability of applying cluster ensemble techniques (ensembles or committees) to gene expression data. More specifically, we carry out experiments with three different cluster ensemble methods that have been used in many works in the literature: the co-association matrix, relabeling and voting, and ensembles based on graph partitioning. The inputs for these methods are the partitions generated by three clustering algorithms representing different paradigms: k-means, Expectation-Maximization (EM), and the hierarchical method with average linkage. These algorithms have been widely applied to gene expression data. In general, the results obtained in our experiments indicate that the cluster ensemble methods perform better than the individual techniques. This happens mainly for the heterogeneous ensembles, that is, ensembles built with base partitions generated by different clustering algorithms.
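
The co-association method can be sketched directly with scikit-learn: base partitions from k-means, EM (Gaussian mixture), and average-linkage hierarchical clustering are combined by counting how often each pair of samples falls in the same cluster; the synthetic data and the final consensus step below are illustrative assumptions, not the experimental setup of this work.

    # Sketch of a co-association matrix cluster ensemble: base partitions from
    # k-means, EM (Gaussian mixture) and average-linkage hierarchical clustering
    # are combined by counting how often each pair of samples is co-clustered.
    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering
    from sklearn.mixture import GaussianMixture
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=60, centers=3, random_state=0)  # stand-in for expression data
    k = 3

    partitions = [
        KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X),
        GaussianMixture(n_components=k, random_state=0).fit_predict(X),
        AgglomerativeClustering(n_clusters=k, linkage="average").fit_predict(X),
    ]

    n = X.shape[0]
    coassoc = np.zeros((n, n))
    for labels in partitions:
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(partitions)

    # Consensus partition: average-linkage clustering on 1 - co-association.
    consensus = AgglomerativeClustering(
        n_clusters=k, metric="precomputed", linkage="average"
    ).fit_predict(1 - coassoc)
    print(consensus[:10])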

Relevance:

100.00%

Publisher:

Abstract:

A Software Product Line (SPL) is a software development paradigm whose main focus is to identify common features and variability among applications in a specific domain. An SPL is designed to meet the requirements of all products in its product family. These requirements, and the SPL itself, may change over time due to several factors, such as the evolution of product requirements, of the market, of the SPL process, and of the technologies used to develop the products. To handle these changes, the SPL must be modified and must evolve so as not to become obsolete and to adapt to new requirements. Change impact analysis is the activity of understanding and identifying the consequences of these changes on the SPL. Impact analysis on an SPL may be supported by traceability relationships, which link artefacts created during all phases of software development. Although traceability-based change impact analysis solutions exist for conventional software, there is a lack of such solutions for SPLs, since existing solutions do not include estimates specific to SPL artefacts. Thus, this work proposes a change impact analysis process and a tool for assessing change impact through the traceability of artefacts in an SPL. For this purpose, we specified a change impact analysis process that considers the artefacts produced during SPL development. We have also implemented a tool that estimates and identifies the SPL artefacts and products affected by changes in other products, in classes, in features, between SPL releases, and by changes in core assets and variability. Finally, the results were evaluated through metrics.
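
The sketch below illustrates traceability-based impact analysis as reachability over a graph of traceability links; the artefact names and link structure are hypothetical and do not come from the proposed tool.

    # Sketch of traceability-based change impact analysis: artefacts and
    # traceability links form a graph, and the impact set of a change is the
    # set of artefacts reachable from the changed one. Artefact names are
    # hypothetical SPL artefacts, not the tool's actual data.

    trace = {  # artefact -> artefacts that depend on it
        "feature:Payment":       ["class:PaymentService", "aspect:PaymentLogging"],
        "class:PaymentService":  ["product:MobileStore", "product:WebStore"],
        "aspect:PaymentLogging": ["product:WebStore"],
    }

    def impact_set(changed):
        """All artefacts transitively affected by a change."""
        impacted, stack = set(), [changed]
        while stack:
            node = stack.pop()
            for dep in trace.get(node, []):
                if dep not in impacted:
                    impacted.add(dep)
                    stack.append(dep)
        return impacted

    print(sorted(impact_set("feature:Payment")))
    # ['aspect:PaymentLogging', 'class:PaymentService',
    #  'product:MobileStore', 'product:WebStore']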

Relevance:

100.00%

Publisher:

Abstract:

Brazil is going through the transition from analogue to digital television transmission. This new technology, in addition to providing high-quality audio and video, also allows applications to run on the television. Equipment called set-top boxes is needed to receive this new signal and to create the environment necessary to run applications. At first, the only way to interact with these applications is the remote control. However, the remote control has serious usability problems when used to interact with some types of applications. This research proposes a software implementation capable of creating an environment that allows a smartphone to interact with such applications. In addition to this implementation, a comparative study is performed between using the remote control and using a smartphone to interact with digital television applications, taking usability-related parameters into account. After analyzing the data collected in the comparative study, it is possible to identify which device provides a more engaging interactive experience for users.

Relevance:

100.00%

Publisher:

Abstract:

Product derivation tools are responsible for automating the development process of software product lines. The configuration knowledge, which is responsible for mapping the problem space to the solution space, plays a fundamental role in product derivation approaches. Each product derivation approach adopts different strategies and techniques to manage the variabilities present in code assets, and there is a lack of empirical studies analyzing these different approaches. This dissertation aims to compare automatic product derivation approaches systematically through two empirical studies. The studies are analyzed from two perspectives: (i) a qualitative one, which analyzes the characteristics of the approaches using specific criteria; and (ii) a quantitative one, which quantifies specific properties of the product derivation artifacts produced by the different approaches. A set of criteria and metrics is also proposed with the aim of supporting the qualitative and quantitative analyses. Two software product lines, from the web and mobile application domains, are the targets of our study.