954 results for software-defined network
Abstract:
This paper introduces a mechanism for generating a series of rules that characterize the money-price relationship for the USA, defined as the relationship between the rate of growth of the money supply and inflation. Monetary component data are used to train a selection of candidate feedforward neural networks. The selected network is mined for rules, expressed in human-readable and machine-executable form. The accuracy of the extracted rules is compared with that of the network, and expert commentary is given on the readability and reliability of the extracted rule set. The ultimate goal of this research is to produce rules that meaningfully and accurately describe inflation in terms of the monetary component dataset.
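As a rough illustration of the pipeline the abstract describes (train a feedforward network on money-growth data, then compare a mined rule against it), the following Python sketch uses synthetic data and scikit-learn's MLPRegressor; the dataset, network architecture and threshold rule are hypothetical stand-ins, not the paper's actual ones.

```python
# Minimal sketch: synthetic money-growth data, a candidate feedforward network,
# and a hand-written threshold rule standing in for a mined rule.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
money_growth = rng.uniform(0.0, 0.12, size=200)                # annual money-supply growth
inflation = 0.8 * money_growth + rng.normal(0.0, 0.005, 200)   # hypothetical relationship

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(money_growth.reshape(-1, 1), inflation)

def mined_rule(m):
    # Illustrative human-readable rule: inflation tracks money growth above a threshold.
    return 0.8 * m if m > 0.02 else 0.015

net_pred = net.predict(money_growth.reshape(-1, 1))
rule_pred = np.array([mined_rule(m) for m in money_growth])
print("network MAE:", np.abs(net_pred - inflation).mean())
print("rule MAE:   ", np.abs(rule_pred - inflation).mean())
```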
Abstract:
This paper explores the use of the optimization procedures in SAS/OR software, applied to contemporary logistics distribution network design using an integrated multiple criteria decision making approach. Unlike traditional optimization techniques, the proposed approach, combining the analytic hierarchy process (AHP) and goal programming (GP), considers both quantitative and qualitative factors. In the integrated approach, AHP is used to determine the relative importance weightings, or priorities, of alternative warehouses with respect to both deliverer-oriented and customer-oriented criteria. Then, a GP model incorporating the system, resource, and AHP priority constraints is formulated to select the best set of warehouses without exceeding the limited available resources. To facilitate the use of the integrated multiple criteria decision making approach by SAS users, an ORMCDM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear programming models based on the selected GP model. An example is given to illustrate how one could use the code to design a logistics distribution network.
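A minimal sketch of the AHP step described above, assuming an invented 3x3 pairwise comparison matrix and a trivial greedy stand-in for the goal-programming selection; the paper's actual criteria, constraints and the SAS/OR ORMCDM macro are not reproduced here.

```python
# AHP priorities via the principal eigenvector, then a greedy stand-in for the
# goal-programming warehouse selection. All numbers are invented.
import numpy as np

# Pairwise comparisons of three candidate warehouses on one criterion (Saaty scale).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
priorities = principal / principal.sum()            # AHP priority weights

# A GP model would maximise total priority subject to resource constraints;
# here a trivial greedy selection under a single capacity budget.
cost = np.array([40, 25, 20])
budget = 50
chosen = []
for i in np.argsort(-priorities):
    if cost[i] <= budget:
        chosen.append(int(i))
        budget -= cost[i]
print("priorities:", np.round(priorities, 3), "selected warehouses:", chosen)
```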
Abstract:
We investigate knowledge exchange among commercial organizations, the rationale behind it, and its effects on the market. Knowledge exchange is known to be beneficial for industry, but in order to explain it, authors have used high-level concepts like network effects, reputation, and trust. We attempt to formalize a plausible and elegant explanation of how and why companies adopt information exchange and why it benefits the market as a whole when this happens. This explanation is based on a multiagent model that simulates a market of software providers. Even though the model does not include any high-level concepts, information exchange naturally emerges during simulations as a successful profitable behavior. The conclusions reached by this agent-based analysis are twofold: 1) a straightforward set of assumptions is enough to give rise to exchange in a software market, and 2) knowledge exchange is shown to increase the efficiency of the market.
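The following toy agent-based sketch, with deliberately simplified agents and payoffs that are not the paper's actual model, shows the general shape of such a simulation: providers that exchange knowledge can end up more profitable than those that do not.

```python
# Toy market of software providers; whether knowledge exchange pays off is
# left to the simulation. The payoff and learning rules are deliberately simple.
import random

random.seed(1)

class Provider:
    def __init__(self):
        self.knowledge = random.random()
        self.shares = random.random() < 0.5   # strategy: exchange knowledge or not
        self.profit = 0.0

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

providers = [Provider() for _ in range(20)]
for _ in range(100):
    for a in providers:
        if a.shares:
            partners = [p for p in providers if p.shares and p is not a]
            if partners:
                b = random.choice(partners)
                # Exchanging providers move towards the better of the pair.
                a.knowledge = max(a.knowledge, 0.5 * (a.knowledge + b.knowledge))
        a.profit += a.knowledge            # profit grows with accumulated knowledge

print("mean profit, exchanging:    ", mean([p.profit for p in providers if p.shares]))
print("mean profit, not exchanging:", mean([p.profit for p in providers if not p.shares]))
```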
Abstract:
This paper describes work conducted as a joint collaboration between the Virtual Design Team (VDT) research group at Stanford University (USA), the Systems Engineering Group (SEG) at De Montfort University (UK) and Elipsis Ltd. We describe a new docking methodology in which we combine the use of two radically different types of organizational simulation tool. The VDT simulation tool operates on a standalone computer and employs computational agents during simulated execution of a pre-defined process model (Kunz, 1998). The other software tool, DREAMS, operates over a standard TCP/IP network and employs human agents (real people) during a simulated execution of a pre-defined process model (Clegg, 2000).
Abstract:
The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for verification of numerical software, supporting a substantially more expressive language for specifications than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm and its source code is publicly available. In this paper we report on experiments using PolyPaver that indicate that the additional expressivity does not come at a performance cost when comparing with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
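To make the kind of property being verified concrete, here is a small Python sketch of a floating-point Riemann sum checked against the exact integral of x^2 on [0, 1]; the function, bounds and error tolerance are invented for illustration, and PolyPaver proves such inclusions rather than testing them as done here.

```python
# A floating-point Riemann sum and the tight functional property it should satisfy.
def riemann(f, a, b, n):
    """Left Riemann sum of f over [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: x * x
approx = riemann(f, 0.0, 1.0, 10_000)
exact = 1.0 / 3.0

# A tight specification might require the result to lie in the interval
# [exact - 1e-4, exact + 1e-4]; here we simply test that inclusion.
assert abs(approx - exact) <= 1e-4
print(approx, exact)
```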
Abstract:
As a discipline, supply chain management (SCM) has traditionally been concerned primarily with the procurement, processing, movement and sale of physical goods. However, an important class of products has emerged - digital products - which cannot be described as physical because they do not obey commonly understood physical laws. They possess neither mass nor volume, and they require no energy in their manufacture or distribution. Over the Internet, they can be distributed at speeds unimaginable in the physical world, and every copy produced is a 100% perfect duplicate of the original. Furthermore, the ease with which digital products can be replicated has few analogues in the physical world. This paper assesses the effect of non-physicality on one such product – software – in relation to the practice of SCM. It explores the challenges that arise when managing the software supply chain and how practitioners are addressing these challenges. Using a two-pronged exploratory approach that combines a review of the literature on software management with direct interviews with software distribution practitioners, a number of key challenges associated with software supply chains are uncovered, along with responses to these challenges. The paper proposes a new model for software supply chains that takes into account the non-physicality of the product being delivered. Central to this model are the replacement of physical flows with flows of intellectual property, the growing importance of innovation over duplication, and the increased centrality of the customer in the entire process. Hybrid physical/digital supply chains are discussed and a framework for practitioners concerned with software supply chains is presented.
Abstract:
The paper describes possibilities for investigating 43 varieties of file formats (objects), joined into 10 groups; 89 information attacks, joined into 33 groups; and 73 compression methods, joined into 10 groups. Experimental, expert, possible and real relations between the attack groups, method groups and object groups are determined by means of matrix transformations, and the respective maximum and potential sets are defined. Finally, assessments and conclusions for future investigation are proposed.
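A hedged sketch of the matrix composition idea: with tiny invented 0/1 relation matrices (the paper's actual 33 attack groups, 10 method groups and 10 object groups are not reproduced), composing attacks-to-methods with methods-to-objects yields a potential attacks-to-objects relation.

```python
# Composing two invented 0/1 relation matrices to obtain the potential
# attacks-to-objects relation.
import numpy as np

# attacks_vs_methods[i, j] = 1 if attack group i is relevant to method group j
attacks_vs_methods = np.array([[1, 0, 1],
                               [0, 1, 0],
                               [1, 1, 0]])
# methods_vs_objects[j, k] = 1 if method group j applies to object (format) group k
methods_vs_objects = np.array([[1, 1, 0],
                               [0, 1, 1],
                               [1, 0, 0]])

attacks_vs_objects = (attacks_vs_methods @ methods_vs_objects) > 0
print(attacks_vs_objects.astype(int))
```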
Abstract:
This article introduces a small-setting case study on the benefits of using TSPi in a software project. A process adapted from the current process and based on TSPi was defined. The pilot project had schedule and budget constraints. The process began by gathering historical data from previous projects in order to build a measurement repository. The project was launched with the following goals: increase productivity, reduce test time and improve product quality. Finally, the results were analysed and the goals were verified.
Abstract:
In Software Engineering, traceability is defined as the capability to track requirements, their evolution and their transformation into the different components involved in the engineering process, as well as the management of the relationships between those components. However, the current state of the art in traceability does not take into account many of the elements that compose a product, especially those created before the requirements arise, nor the appropriate use of traceability to manage the underlying knowledge so that it can be handled by other organizational or engineering processes. In this work we describe the architecture of a reference model that establishes a set of definitions, processes and models which allow proper management of traceability and its further use, in a context wider than that of software development.
Abstract:
This paper is dedicated to modelling network maintenance based on a live example: maintaining an ATM banking network, where any problem means monetary loss. A full analysis is made in order to distinguish valuable from non-valuable parameters, based on a complex analysis of the available data. Correlation analysis helps to assess the provided data and to produce a comprehensive solution for increasing the effectiveness of network maintenance.
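As a minimal illustration of the correlation step, the sketch below runs a correlation analysis over synthetic maintenance records; the variables and data are hypothetical, not the real ATM network data.

```python
# Correlation analysis over synthetic maintenance records; strongly correlated
# parameters would be the "valuable" ones for maintenance decisions.
import numpy as np

rng = np.random.default_rng(0)
downtime_hours = rng.gamma(2.0, 1.5, size=100)                    # outage duration per incident
response_time = 0.5 * downtime_hours + rng.normal(0, 0.5, 100)    # technician response time
ambient_temp = rng.normal(22, 3, size=100)                        # presumably irrelevant parameter

corr = np.corrcoef(np.vstack([downtime_hours, response_time, ambient_temp]))
print(np.round(corr, 2))
```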
Abstract:
Educational games such as quizzes, quests, puzzles, mazes and logical problems may be modeled as multimedia board games. Within the scope of the ADOPTA project, under development at the Faculty of Mathematics and Informatics at Sofia University, a formal model for the presentation of such educational board games was devised and elaborated. Educational games can be modeled as special board mini-games, with a board of any form and any types of positions. Figures (objects) with certain properties are placed on defined positions, and formal rules are then specified for manipulating these figures and for the resulting effects. The model has been found to be general enough to allow the description and execution control of more complex logical problems, which are solved by several actions delivered to or by the player according to formal rules and context conditions, and, in general, of any learning activities and their workflow. It is used as a basis for a software platform providing facilities for easy construction of multimedia board games and their execution. The platform consists of a game designer (i.e., a game authoring tool) and a game run-time controller that communicate with each other through a game repository. Many examples of educational board games suitable for didactic purposes, self-evaluation, etc. have been created and modeled; these are intended to be designed easily by authors with no IT skills or experience. By means of game metadata descriptions, these games are to be included in narrative storyboards and then delivered to learners with the appropriate profile, according to their learning style, preferences, etc. Moreover, the use of artificial intelligence agents is planned as well, either as virtual opponents of the player or as virtual advisers helping the gamer find the right solution within a given domain, such as discovering a treasure using a location map, finding the best tour in a virtual museum, or guessing an unknown word in a hangman game.
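A hedged sketch of how such a board mini-game might be represented in code; the class names and fields below are illustrative and are not the ADOPTA platform's actual model.

```python
# Illustrative data model: a board of named positions, figures with properties,
# and rules given as predicates plus effects.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Figure:
    name: str
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Board:
    positions: Dict[str, Optional[Figure]]        # a board of any form: named positions

@dataclass
class Rule:
    description: str
    applies: Callable[[Board, str, str], bool]    # may the figure at src be moved to dst?
    effect: Callable[[Board, str, str], None]     # the resulting change to the board

def move(board: Board, rules: List[Rule], src: str, dst: str) -> bool:
    """Apply the first rule that permits manipulating the figure at src towards dst."""
    for rule in rules:
        if rule.applies(board, src, dst):
            rule.effect(board, src, dst)
            return True
    return False
```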
Abstract:
High-quality software documentation is essential for understanding software systems. Shorter time-to-market software cycles increase the importance of automation for keeping the documentation up to date. In this paper, we describe automatic support of the software documentation process using semantic technologies. We introduce a software documentation ontology as the underlying knowledge base. The defined ontology is populated automatically by analysing source code, software documentation and code execution. Through selected results we demonstrate that the use of such semantic systems can support software documentation processes efficiently.
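As an illustration of populating such an ontology from source code, the sketch below uses Python's ast module and the rdflib library with a hypothetical namespace and ontology terms; the paper's actual ontology and extraction pipeline are not specified here.

```python
# Populate a small documentation graph from parsed source code. The DOC
# namespace and the Class/Method/documentation terms are hypothetical.
import ast
from rdflib import Graph, Literal, Namespace, RDF

DOC = Namespace("http://example.org/softdoc#")
g = Graph()

source = '''
class PaymentService:
    def charge(self, amount):
        """Charge the given amount."""
'''

for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.ClassDef):
        g.add((DOC[node.name], RDF.type, DOC.Class))
    elif isinstance(node, ast.FunctionDef):
        g.add((DOC[node.name], RDF.type, DOC.Method))
        docstring = ast.get_docstring(node)
        if docstring:
            g.add((DOC[node.name], DOC.documentation, Literal(docstring)))

print(g.serialize(format="turtle"))
```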
Abstract:
This paper looks at potential distribution network stability problems under the Smart Grid scenario, which considers distributed energy resources (DERs) such as renewable power generation and intelligent loads with power-electronically controlled converters. The background of this topic is introduced and potential problems are defined from conventional power system stability and power-electronic system stability theories. Challenges are identified, with possible solutions drawn from steady-state limits and from small-signal and large-signal stability indexes and criteria. Parallel computation techniques might be employed for simulation, or simplification approaches are required, for large-scale distribution network analysis.
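For concreteness, a small-signal stability check typically reduces to inspecting the eigenvalues of the linearised system matrix; the sketch below uses a toy 2x2 matrix, whereas real DER-rich distribution networks require far more detailed converter and network models.

```python
# Eigenvalue-based small-signal stability check for a toy linearised system dx/dt = A x.
import numpy as np

A = np.array([[-0.5,  2.0],
              [-1.0, -0.3]])

eigenvalues = np.linalg.eigvals(A)
stable = bool(np.all(eigenvalues.real < 0))   # all modes damped => small-signal stable
print(eigenvalues, "stable:", stable)
```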
Abstract:
ACM Computing Classification System (1998): D.2.5, D.2.9, D.2.11.
Abstract:
The realisation of an eventual low-voltage (LV) Smart Grid with a complete communication infrastructure is a gradual process. During this evolution, the protection scheme of distribution networks should be continuously adapted and optimised to fit the protection and cost requirements at the time. This paper aims to review practices and research around the design of an effective, adaptive and economical distribution network protection scheme. The background of this topic is introduced and potential problems are defined from conventional protection theories and new Smart Grid technologies. Challenges are identified, with possible solutions defined as a pathway towards ultimately flexible and reliable LV protection systems.