993 results for Software Complexity


Relevance: 30.00%

Publisher:

Abstract:

Nowadays, the importance of using software processes is already consolidated and is considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge in the software industry: it involves adapting software process models to the reality of the projects, and it must also promote the reuse of past experiences when defining and developing software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the software systems produced. This work explored the use and adaptation of consolidated software product line techniques to support the management of variabilities in software process families. To achieve this aim: (i) a systematic literature review was conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines was proposed and developed; and (iii) empirical studies and a controlled experiment assessed and compared the proposed annotative approach against a compositional one. A comparative qualitative study analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. A comparative quantitative study considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, a controlled experiment evaluated the effort to use and the understandability of the investigated approaches when modeling and evolving specifications of software process lines. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, to assist the variability management of software process lines.
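
The annotative idea evaluated in these studies can be made concrete with a small sketch: process elements are tagged with feature annotations, and a concrete process variant keeps only the elements whose features are selected. The element names, feature names and the resolve helper below are invented for illustration and are not taken from the cited work.

```python
# Hypothetical sketch of annotation-based variability in a software process line:
# each process element is annotated with the features that require it, and a
# concrete process variant keeps only the elements whose features are selected.

from dataclasses import dataclass


@dataclass(frozen=True)
class ProcessElement:
    name: str
    features: frozenset = frozenset()    # empty annotation = mandatory element


def resolve(process_line, selected_features):
    """Derive a concrete process variant from the annotated process line."""
    return [e for e in process_line if e.features <= selected_features]


# An annotated process line (activities and features invented for illustration).
process_line = [
    ProcessElement("Elicit requirements"),
    ProcessElement("Write formal specification", frozenset({"safety_critical"})),
    ProcessElement("Peer code review", frozenset({"peer_review"})),
    ProcessElement("Deploy continuously", frozenset({"agile"})),
]

variant = resolve(process_line, frozenset({"agile", "peer_review"}))
print([e.name for e in variant])
# ['Elicit requirements', 'Peer code review', 'Deploy continuously']
```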

Relevance: 30.00%

Publisher:

Abstract:

We have recently proposed an extension to Petri nets in order to directly handle all aspects of embedded digital systems. This extension is meant to be used as the internal model of our co-design environment. After analyzing relevant related work and presenting a short introduction to our extension as background material, we describe the details of the timing model used in our approach, which is mainly based on Merlin's time model. We conclude the paper by discussing an example of its usage. © 2004 IEEE.
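
For readers unfamiliar with Merlin's time model, the sketch below illustrates its core idea as a minimal, hypothetical example: each transition carries a static firing interval, so that once enabled at time t it may fire between t + eft and t + lft. The class and the marking representation are illustrative assumptions, not the extension proposed in the paper.

```python
# Hypothetical sketch of Merlin-style timing: every transition carries a static
# firing interval [eft, lft]; once enabled at time t it may not fire before
# t + eft and must fire (or become disabled) no later than t + lft.

class TimedTransition:
    def __init__(self, name, pre, post, eft, lft):
        self.name, self.pre, self.post = name, pre, post
        self.eft, self.lft = eft, lft            # earliest / latest firing times

    def enabled(self, marking):
        return all(marking.get(p, 0) >= n for p, n in self.pre.items())

    def firing_window(self, enabling_time):
        return enabling_time + self.eft, enabling_time + self.lft

    def fire(self, marking):
        m = dict(marking)
        for p, n in self.pre.items():
            m[p] -= n
        for p, n in self.post.items():
            m[p] = m.get(p, 0) + n
        return m


# Invented example: a bus transfer that takes between 2 and 5 time units.
t = TimedTransition("bus_transfer", pre={"req": 1}, post={"done": 1}, eft=2, lft=5)
marking = {"req": 1}
if t.enabled(marking):
    print(t.firing_window(enabling_time=10))     # (12, 15)
    marking = t.fire(marking)
print(marking)                                   # {'req': 0, 'done': 1}
```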

Relevance: 30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 30.00%

Publisher:

Abstract:

Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents' behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and augmenting system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent will follow norms under some conditions and violate them under others. Under which conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, things may not happen at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their system representation, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and cognitive software agents.
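
As a rough illustration of defeasibility (not of the temporal modal defeasible logic developed in the dissertation), the sketch below derives that a norm is binding and retracts that conclusion once a conflicting piece of information (an exception) becomes known; the norm, facts and predicate names are invented.

```python
# Hypothetical sketch of defeasible reasoning: a norm is (defeasibly) binding on an
# agent only as long as no applicable exception defeats it; new information can
# therefore invalidate a previously derivable conclusion.

def binding(norm, facts, exceptions):
    condition, _obligation = norm
    if condition not in facts:
        return False                     # the norm's condition does not hold
    defeaters = exceptions.get(norm, ())
    return not any(e in facts for e in defeaters)


# Invented norm: an agent that is driving ought to stop at a red light,
# unless it is an emergency vehicle on duty.
norm = ("driving", "obliged_to_stop_at_red_light")
exceptions = {norm: ("emergency_vehicle_on_duty",)}

print(binding(norm, {"driving"}, exceptions))                               # True
print(binding(norm, {"driving", "emergency_vehicle_on_duty"}, exceptions))  # False
```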

Relevance: 30.00%

Publisher:

Abstract:

This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulations has proved to be very useful for developing and testing control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, makes it possible to simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics, allowing in the latter case an evaluation of the performance and operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of the higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured both for off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
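
The closed loop described above (ECU actuator commands in, simulated sensor signals out) can be summarised by a sketch along the following lines; the toy engine model, the signal names and the fixed time step are illustrative assumptions and are not the Simulink models developed in this work.

```python
# Hypothetical sketch of one Hardware-In-the-Loop step: at every real-time tick the
# simulator reads the ECU actuator commands, advances the plant model by one fixed
# step, and writes back the signals the real sensors would have produced.

DT = 0.001  # fixed simulation step [s]; must complete within the real-time deadline


def engine_model_step(state, throttle, load_torque, dt=DT):
    """Toy first-order engine-speed model (illustrative only)."""
    inertia = 0.2        # kg*m^2
    torque_gain = 150.0  # Nm per unit throttle
    friction = 0.05      # Nm*s/rad
    torque = torque_gain * throttle - friction * state["speed"] - load_torque
    state["speed"] += (torque / inertia) * dt
    return state


def hil_step(read_ecu_outputs, write_sensor_inputs, state):
    throttle = read_ecu_outputs()["throttle"]              # actuator command from the ECU
    state = engine_model_step(state, throttle, load_torque=20.0)
    write_sensor_inputs({"engine_speed": state["speed"]})  # simulated sensor signal
    return state


# Off-line (Software-In-the-Loop) use: the I/O callbacks are replaced by stubs.
state = {"speed": 100.0}  # rad/s
state = hil_step(lambda: {"throttle": 0.3}, print, state)
```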

Relevance: 30.00%

Publisher:

Abstract:

During the last few decades, unprecedented technological growth has driven embedded systems design, with Moore’s Law being the leading factor of this trend. Today, in fact, an ever increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers are in fact facing the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they are confronted with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis is focused on virtualization techniques, with the goal of mitigating, and overcoming when possible, some of the challenges introduced by the many-core design paradigm.

Relevance: 30.00%

Publisher:

Abstract:

When project managers determine schedules for resource-constrained projects, they commonly use commercial project management software packages. Which resource-allocation methods are implemented in these packages is proprietary information. The resource-allocation problem is in general computationally difficult to solve to optimality. Hence, the question arises whether and how various project management software packages differ in quality with respect to their resource-allocation capabilities. None of the few existing papers on this subject uses a sizeable data set and recent versions of common software packages. We experimentally analyze the resource-allocation capabilities of Acos Plus.1, AdeptTracker Professional, CS Project Professional, Microsoft Office Project 2007, Primavera P6, Sciforma PS8, and Turbo Project Professional. Our analysis is based on 1560 instances of the precedence- and resource-constrained project scheduling problem RCPSP. The experiment shows that using the resource-allocation feature of these packages may lead to a project duration increase of almost 115% above the best known feasible schedule. The increase grows with increasing resource scarcity and with an increasing number of activities. We investigate the impact of different complexity scenarios and priority rules on the project duration obtained by the software packages. We provide a decision table to support managers in selecting a software package and a priority rule.
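
As background on priority-rule methods for the RCPSP, the sketch below shows a serial schedule-generation scheme with a shortest-processing-time priority rule on a tiny invented instance with one renewable resource; it is not the method implemented by any of the tested packages.

```python
# Hypothetical sketch: serial schedule generation for a tiny RCPSP instance with a
# single renewable resource; activities are scheduled in priority order at their
# earliest precedence- and resource-feasible start time.

CAPACITY = 4
# activity: (duration, resource demand, set of predecessors)
acts = {
    "A": (3, 2, set()),
    "B": (2, 3, set()),
    "C": (4, 2, {"A"}),
    "D": (1, 4, {"A", "B"}),
}

HORIZON = sum(dur for dur, _, _ in acts.values())
usage = [0] * HORIZON                        # resource units in use per period
start, finish = {}, {}


def earliest_start(dur, dem, preds):
    t = max((finish[p] for p in preds), default=0)
    while any(usage[tt] + dem > CAPACITY for tt in range(t, t + dur)):
        t += 1
    return t


while len(start) < len(acts):
    # eligible activities: not yet scheduled, all predecessors already scheduled
    eligible = [a for a in acts if a not in start and acts[a][2].issubset(start)]
    # priority rule: shortest processing time, ties broken alphabetically
    a = min(eligible, key=lambda x: (acts[x][0], x))
    dur, dem, preds = acts[a]
    t = earliest_start(dur, dem, preds)
    start[a], finish[a] = t, t + dur
    for tt in range(t, t + dur):
        usage[tt] += dem

print(start, "makespan =", max(finish.values()))
# {'B': 0, 'A': 2, 'D': 5, 'C': 6} makespan = 10
```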

Relevance: 30.00%

Publisher:

Abstract:

The article introduces the E-learning Circle, a tool developed to assure the quality of the software design process of e-learning systems, considering pedagogical principles as well as technology. The E-learning Circle consists of a number of concentric circles which are divided into three sectors. The content of the inner circles is based on pedagogical principles, while the outer circle specifies how the pedagogical principles may be implemented with technology. The circle’s centre is dedicated to the subject taught, ensuring focus on the specific subject’s properties. The three sectors represent the student, the teacher and the learning objectives. The strengths of the E-learning Circle are its compact presentation combined with the overview it provides, as well as its usefulness as a design tool that deals with complexity, provides a common language and embeds best practice. The E-learning Circle is not a prescriptive method, but is useful in several design models and processes. The article presents two projects where the E-learning Circle was used as a design tool.

Relevance: 30.00%

Publisher:

Abstract:

This article aims to provide courts and policymakers with an analytical framework that, building upon the traditional rationales of IP exhaustion doctrine, identifies factors which advocate for a modulation or flexibilization of the role of exhaustion in copyright law. Factors include (i) the personal features of acquirers of copies of copyrighted works, distinguishing between consumers and commercial users; (ii) whether post-sale restrictions have been adequately communicated to acquirers and have been agreed in the contract or license; (iii) the degree of complexity of the acquired goods and their prospects of productive uses and interoperability; (iv) the role of other exclusive rights in providing rightholders with indirect control over uses of the copies in the aftermarket; (v) the impact of post-sale restraints in preventing opportunism in long-term contracts and in reducing deadweight losses created by IP pricing; and (vi) the temporal scope of post-sale restraints. After setting out this analytical framework, the ECJ Judgement in Oracle v. UsedSoft is discussed.

Relevance: 30.00%

Publisher:

Abstract:

Individual learning is central to the success of the transition phase in software maintenance offshoring projects. However, little is known about how learning activities, such as on-the-job training and formal presentations, are effectively combined during the transition phase. In this study, we present and test propositions derived from cognitive load theory. The results of a multiple-case study suggest that learning effectiveness was highest when learning tasks such as authentic maintenance requests were used. Consistent with cognitive load theory, learning tasks were most effective when they imposed moderate cognitive load. Our data indicate that cognitive load was influenced by the expertise of the onsite coordinator, by intrinsic task complexity, by the degree of specification of tasks, and by supportive information. Cultural and semantic distances may influence learning by inhibiting supportive information, specification, and the assignment of learning tasks.

Relevance: 30.00%

Publisher:

Abstract:

The increasing practice of offshore outsourcing of software maintenance has posed the challenge of effectively transferring knowledge to the individual software engineers of the vendor. In this theoretical paper, we discuss the implications of two learning theories, the model of work-based learning (MWBL) and cognitive load theory (CLT), for knowledge transfer during the transition phase. Taken together, the theories suggest that learning mechanisms need to be aligned with the type of knowledge (tacit versus explicit), task characteristics (complexity and recurrence), and the recipients’ expertise. The MWBL proposes that learning mechanisms need to include conceptual and practical activities based on the relative importance of explicit and tacit knowledge. CLT explains how effective portfolios of learning mechanisms change over time. While job shadowing, completion tasks, and supportive information may prevail at the outset of transition, they may be replaced by work on conventional tasks towards the end of transition.

Relevance: 30.00%

Publisher:

Abstract:

We present results of a benchmark test evaluating the resource allocation capabilities of the project management software packages Acos Plus.1 8.2, CA SuperProject 5.0a, CS Project Professional 3.0, MS Project 2000, and Scitor Project Scheduler 8.0.1. The tests are based on 1560 instances of precedence- and resource-constrained project scheduling problems. For different complexity scenarios, we analyze the deviation of the makespan obtained by the software packages from the best feasible makespan known. Among the tested software packages, Acos Plus.1 and Project Scheduler show the best resource allocation performance. Moreover, our numerical analysis reveals a considerable performance gap between the implemented methods and state-of-the-art project scheduling algorithms, especially for large-sized problems. Thus, there is still a significant potential for improving solutions to resource allocation problems in practice.
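
The deviation measure referred to above is presumably the usual relative gap to the best known feasible makespan; a minimal sketch with invented numbers:

```python
# Hypothetical illustration of the deviation measure: relative gap, in percent,
# between the makespan produced by a package and the best known feasible makespan.
best_known, obtained = 100, 127   # invented example values
deviation = 100.0 * (obtained - best_known) / best_known
print(f"deviation = {deviation:.1f}%")   # deviation = 27.0%
```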

Relevance: 30.00%

Publisher:

Abstract:

Oligonucleotides comprising unnatural building blocks, which interfere with the translation machinery, have gained increased attention for the treatment of gene-related diseases (e.g. antisense, RNAi). Due to structural modifications, synthetic oligonucleotides exhibit increased biostability and bioavailability upon administration. Consequently, classical enzyme-based sequencing methods are not applicable to their sequence elucidation and verification. Tandem mass spectrometry is the method of choice for performing such tasks, since gas-phase dissociation is not restricted to natural nucleic acids. However, tandem mass spectrometric analysis can generate product ion spectra of tremendous complexity, as the number of possible fragments grows rapidly with increasing sequence length. The fact that structural modifications affect the dissociation pathways greatly increases the variety of analytically valuable fragment ions. The gas-phase dissociation of oligonucleotides is characterized by the cleavage of one of the four bonds along the phosphodiester chain, by the accompanying loss of nucleobases, and by the generation of internal fragments due to secondary backbone cleavage. For example, an 18-mer oligonucleotide yields a total of 272,920 theoretical fragment ions. In contrast to the processing of peptide product ion spectra, which nowadays is highly automated, there is a lack of tools assisting the interpretation of oligonucleotide data. The existing web-based and stand-alone software applications are primarily designed for the sequence analysis of natural nucleic acids, but do not account for chemical modifications and adducts. Consequently, we developed a software tool to support the interpretation of mass spectrometric data of natural and modified nucleic acids and their adducts with chemotherapeutic agents.
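
To illustrate how quickly the number of candidate fragments grows with sequence length, a rough counting sketch is given below. The fragment typology it assumes (eight terminal ion series per cleaved linkage, optional single base loss, and internal fragments from two backbone cleavages) is a simplification and is not intended to reproduce the exact figure of 272,920 ions cited above.

```python
# Rough, hypothetical fragment count for an n-mer oligonucleotide (simplified model):
# - terminal fragments: cleavage of one of the 4 backbone bonds in any of the n-1
#   linkages, read towards the 5' end (a/b/c/d ions) or the 3' end (w/x/y/z ions)
# - each fragment may additionally lose one of its nucleobases
# - internal fragments arise from two backbone cleavages

def rough_fragment_count(n):
    terminal = 0
    for length in range(1, n):                        # residues retained in the fragment
        terminal += 2 * 4 * (1 + length)              # 2 directions, 4 bond types, base-loss variants
    internal = 0
    for length in range(1, n - 1):                    # internal stretches exclude both termini
        positions = n - 1 - length                    # possible placements of the stretch
        internal += positions * 4 * 4 * (1 + length)  # bond type at each of the two cleavage sites
    return terminal + internal

for n in (6, 12, 18):
    print(n, rough_fragment_count(n))   # the count grows roughly cubically with n
```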

Relevance: 30.00%

Publisher:

Abstract:

The increasing complexity of current software systems is encouraging the development of self-managed software architectures, i.e. systems capable of reconfiguring their structure at runtime to fulfil a set of goals. Several approaches have covered different aspects of their development, but some issues remain open, such as the maintainability or the scalability of self-management subsystems. Centralized approaches, like self-adaptive architectures, offer good maintenance properties but do not scale well for large systems. In contrast, decentralized approaches, like self-organising architectures, offer good scalability but are not maintainable: reconfiguration specifications are scattered and often tangled with functional specifications. In order to address these issues, this paper presents an aspect-oriented autonomic reconfiguration approach where: (1) each subsystem is provided with self-management properties so it can evolve itself and the components it is composed of; (2) self-management concerns are isolated and encapsulated into aspects, thus improving their reuse and maintenance. (Slovenian summary: An approach to self-reconfiguration of software architectures is presented.)
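
As an informal illustration of isolating a self-management concern from functional code (using a Python decorator as a stand-in for aspect weaving; the component, the monitor and the reconfiguration trigger below are invented), consider:

```python
# Hypothetical sketch: the reconfiguration concern lives in one place (the decorator)
# instead of being scattered and tangled with the functional code of each component.
import functools

RECONFIG_THRESHOLD = 3  # invented trigger: consecutive failures before reconfiguring


def self_managed(reconfigure):
    """Wrap a component operation with a monitoring/reconfiguration 'aspect'."""
    def decorator(operation):
        failures = {"count": 0}

        @functools.wraps(operation)
        def wrapper(*args, **kwargs):
            try:
                result = operation(*args, **kwargs)
                failures["count"] = 0
                return result
            except Exception:
                failures["count"] += 1
                if failures["count"] >= RECONFIG_THRESHOLD:
                    reconfigure()        # e.g. replace the component, rebind connectors
                    failures["count"] = 0
                raise
        return wrapper
    return decorator


def switch_to_backup():
    print("reconfiguring: switching to the backup component")


@self_managed(reconfigure=switch_to_backup)
def handle_request(payload):
    ...  # purely functional code, unaware of the reconfiguration logic
```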