991 results for parallel architecture
Abstract:
The aim of this dissertation is to bridge and synthesize the different streams of literature addressing ecosystem architecture through a multiple-lens perspective. In addition, the structural properties of the architecture and the processes for designing and managing it are examined. With this approach, the oft-neglected actor-structure duality is addressed, and both position and structure, and action and process, come under scrutiny. Further, the developed framework and empirical evidence offer valuable insights into how firms collectively create value and individually appropriate value. The dissertation is divided into two parts. The first part comprises a literature review as well as the conclusions of the whole study, and the second part includes six research publications. The dissertation is based on three different reasoning logics: abduction, induction and deduction; related qualitative and quantitative methodologies are utilized in the empirical examination of the phenomenon in the information and communication technology industry. The results suggest, firstly, that there are endogenous and exogenous structural properties of the ecosystem architecture. Of these, the former can be more easily influenced by a particular actor, whereas the latter are taken more or less for granted. Secondly, the exogenous ecosystem design properties influence the value creation potential of the ecosystem, whereas the endogenous ecosystem design properties influence the value appropriation potential of a particular actor in the ecosystem. Thirdly, the study suggests that there is a relationship between endogenous and exogenous structural properties, in that the endogenous properties can be leveraged to create and reconfigure the exogenous properties, whereas the exogenous properties pose opportunities and restrictions on the use of endogenous properties. In addition, the study suggests that there are different emergent and engineered processes to design and manage ecosystem architecture and to influence both its endogenous and exogenous structural properties. This study makes three main contributions. First, on the conceptual level, it brings coherence and direction to the fast-growing body of literature on novel inter-organizational arrangements, such as ecosystems. It does this by bridging and synthesizing three different streams of literature, namely the boundary, design and orchestration conceptions. Secondly, it sets out a framework that enhances our understanding of the structural properties of ecosystem architecture; of the processes to design and manage ecosystem architecture; and of their influence on the value creation potential of the ecosystem and the value capture potential of a particular firm. Thirdly, it offers empirical evidence of these structural properties and processes.
Abstract:
Cyber security is one of the main topics discussed around the world today. The threat is real, and it is unlikely to diminish. People, businesses, governments, and even armed forces are networked in one way or another; thus, the cyber threat also faces military networking. At the same time, the concept of Network Centric Warfare sets high requirements for military tactical data communications and security. A challenging networking environment and cyber threats force us to consider new approaches to building security into military communication systems. The purpose of this thesis is to develop a cyber security architecture for military networks and to evaluate the designed architecture. The architecture is described in terms of technical functionality. As a new approach, the thesis introduces Cognitive Networks (CN), a theoretical concept for building more intelligent, dynamic and even more secure communication networks. Cognitive networks are capable of observing the networking environment, making decisions for optimal performance, and adapting their system parameters according to those decisions. As a result, the thesis presents a five-layer cyber security architecture that consists of security elements controlled by a cognitive process. The proposed architecture includes the infrastructure, services and application layers, which are managed and controlled by the cognitive and management layers. The architecture defines the tasks of the security elements at a functional level without introducing any new protocols or algorithms. Two separate methods were used for the evaluation. The first is based on the SABSA framework, which uses a layered approach to analyze the overall security of an organization. The second is a scenario-based method in which a risk severity level is calculated. The evaluation results show that the proposed architecture fulfills the security requirements, at least at a high level. However, evaluating the proposed architecture proved to be very challenging, so the evaluation results must be considered critically. The thesis shows that cognitive networks are a promising approach and provide many benefits when designing a cyber security architecture for tactical military networks. However, many implementation problems exist, and several details must be considered and studied in future work.
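As a rough illustration of the observe-decide-adapt cycle that the cognitive layer performs, a minimal sketch follows; the security elements, metrics, and thresholds are invented for illustration and are not taken from the proposed architecture.

```python
# Minimal sketch of an observe-decide-adapt cognitive loop for a security
# element; all names, metrics and thresholds here are illustrative only.
from dataclasses import dataclass

@dataclass
class Observation:
    packet_loss: float      # fraction of packets lost on the tactical link
    intrusion_alerts: int   # alerts raised by a hypothetical IDS element

@dataclass
class Configuration:
    encryption_level: str   # e.g. "standard" or "hardened"
    routing_mode: str       # e.g. "multipath" when the link degrades

def decide(obs: Observation, cfg: Configuration) -> Configuration:
    """Cognitive layer: pick new parameters from the observed state."""
    new_cfg = Configuration(cfg.encryption_level, cfg.routing_mode)
    if obs.intrusion_alerts > 0:
        new_cfg.encryption_level = "hardened"
    if obs.packet_loss > 0.05:
        new_cfg.routing_mode = "multipath"
    return new_cfg

def cognitive_cycle(observe, apply, cfg: Configuration, rounds: int = 10):
    """Observe the network, decide on parameters, adapt the elements."""
    for _ in range(rounds):
        obs = observe()          # observe the networking environment
        cfg = decide(obs, cfg)   # decide for (near-)optimal performance
        apply(cfg)               # adapt system parameters accordingly
    return cfg

# Usage with a stubbed observer reporting one alert and a lossy link.
cfg = Configuration("standard", "singlepath")
cfg = cognitive_cycle(lambda: Observation(0.08, 1), lambda c: None, cfg, rounds=3)
print(cfg)   # -> Configuration(encryption_level='hardened', routing_mode='multipath')
```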
Abstract:
This paper deals with the use of the conjugate gradient method of function estimation for the simultaneous identification of two unknown boundary heat fluxes in parallel plate channels. The fluid flow is assumed to be laminar and hydrodynamically developed. Temperature measurements taken inside the channel are used in the inverse analysis. The accuracy of the present solution approach is examined by using simulated measurements containing random errors, for strict test cases involving functional forms with discontinuities and sharp corners for the unknown functions. Three different types of inverse problems are addressed in the paper, involving the estimation of: (i) spatially dependent heat fluxes; (ii) time-dependent heat fluxes; and (iii) time- and spatially dependent heat fluxes.
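The iterative structure of such a conjugate gradient function-estimation scheme can be sketched as below; a toy linear forward model stands in for the direct, sensitivity and adjoint solutions of the channel-flow problem, so the snippet illustrates only the update logic, not the actual physics.

```python
import numpy as np

def cgm_estimate(A, Y, n_iter=50, tol=1e-10):
    """Conjugate gradient estimation of a discretised boundary heat flux q
    minimising S(q) = ||A q - Y||^2, where A is a toy linear forward model
    mapping the flux to the measured temperatures."""
    m, n = A.shape
    q = np.zeros(n)                      # initial guess for the flux
    grad_old = None
    d = np.zeros(n)
    for _ in range(n_iter):
        r = A @ q - Y                    # residual of the direct problem
        grad = 2.0 * A.T @ r             # gradient (adjoint-problem role)
        gamma = 0.0 if grad_old is None else (grad @ grad) / (grad_old @ grad_old)
        d = grad + gamma * d             # Fletcher-Reeves descent direction
        Ad = A @ d                       # sensitivity-problem role
        beta = (Ad @ r) / (Ad @ Ad)      # optimal step size along d
        q = q - beta * d
        grad_old = grad
        if np.linalg.norm(r)**2 < tol:   # discrepancy-style stopping
            break
    return q

# Tiny usage example with simulated noisy measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))
q_true = np.linspace(0.0, 1.0, 10)
Y = A @ q_true + rng.normal(scale=1e-3, size=40)
print(np.round(cgm_estimate(A, Y), 3))
```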
Abstract:
In this paper we present an algorithm for the numerical simulation of cavitation in the hydrodynamic lubrication of journal bearings. Although this physical process is usually modelled as a free boundary problem, we adopt the equivalent variational inequality formulation. We propose a two-level iterative algorithm, where the outer iteration is associated with the penalty method, used to transform the variational inequality into a variational equation, and the inner iteration is associated with the conjugate gradient method, used to solve the linear system generated by applying the finite element method to the variational equation. The inner part was implemented using the element-by-element strategy, which is easily parallelized. We analyse the behaviour of two physical parameters and discuss some numerical results. We also analyse results related to the performance of a parallel implementation of the algorithm.
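A schematic of the two-level iteration, with an outer penalty loop and an inner conjugate gradient solve, might look as follows; the assembled matrix, load vector and penalty treatment of the non-negativity constraint are simplified stand-ins for the finite element discretisation of the lubrication problem.

```python
import numpy as np

def cg_solve(A, b, x0, tol=1e-10, max_iter=500):
    """Inner iteration: plain conjugate gradient for the SPD system A x = b.
    (In the paper this is performed element by element, which parallelises well.)"""
    x = x0.copy()
    r = b - A @ x
    d = r.copy()
    for _ in range(max_iter):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x += alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x

def penalty_solve(K, f, eps=1e-6, outer_iter=30):
    """Outer iteration: penalise the constraint p >= 0 (cavitation region),
    turning the variational inequality into a sequence of linear systems.
    K and f are stand-ins for the assembled FEM matrix and load vector."""
    p = np.zeros(K.shape[0])
    for _ in range(outer_iter):
        active = (p < 0).astype(float)          # nodes violating p >= 0
        A = K + np.diag(active / eps)           # penalised operator
        p_new = cg_solve(A, f, p)
        if np.linalg.norm(p_new - p) < 1e-8:
            break
        p = p_new
    return np.maximum(p, 0.0)                   # pressure, zero in cavitated zone

# Tiny usage: a 1-D Laplacian stand-in with a sign-changing load.
n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.where(np.arange(n) < n // 2, 1.0, -1.0)
print(np.round(penalty_solve(K, f), 3))
```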
Abstract:
The evolution of digital circuit technology, leading to higher speeds and greater reliability, has allowed the development of machine controllers adapted to new production systems (e.g., Flexible Manufacturing Systems - FMS). Most controllers are developed in accordance with the CNC technology of the corresponding machine tool manufacturer, and any alteration or adaptation of their components is difficult to implement. Machine designers face hardware and software restrictions, such as the lack of interaction among system elements and the impossibility of adding new functions, due to hardware incompatibility and to software that does not allow alterations to the source program. The introduction of the open architecture philosophy enabled the evolution of a new generation of numeric controllers, bringing conventional CNC technology to the standard IBM-PC microcomputer. As a consequence, the characteristics of the CNC (positioning) and of the microcomputer (ease of programming, system configuration, network communication, etc.) are combined. Some researchers have proposed a flexible structure of software and hardware allowing changes in the basic hardware configuration and at all control software levels. In this work, the development of open architecture controllers based on the OSACA, OMAC, HOAM-CNC and OSEC architectures is described.
Abstract:
In this paper we present a feasibility study on using the Cassino Parallel Manipulator (CaPaMan) as an earthquake simulator. We propose a suitable formulation to reproduce the frequency, amplitude and acceleration magnitude of a seismic motion through the motion of the movable platform, by giving it a suitable input motion. We report numerical simulations of the three principal earthquake types: one at the epicenter (having a vertical motion), another far from the epicenter (with the motion on a horizontal plane), and a combined general motion (with both vertical and horizontal components).
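As an illustration of how an input motion with prescribed frequency, amplitude and acceleration magnitude could be generated for the movable platform, consider the following sketch; the parameter values and the pure-sine waveform are illustrative assumptions, not the formulation or the workspace limits of CaPaMan.

```python
import numpy as np

def seismic_trajectory(amplitude, frequency, duration, kind="combined", dt=0.001):
    """Generate a sinusoidal platform trajectory approximating a seismic motion.

    kind selects the simulated earthquake type:
      "epicenter"  - purely vertical motion,
      "far-field"  - purely horizontal motion,
      "combined"   - both components.
    Returns time, displacements (x, z) and the peak acceleration magnitude."""
    t = np.arange(0.0, duration, dt)
    omega = 2.0 * np.pi * frequency
    s = amplitude * np.sin(omega * t)
    x = s if kind in ("far-field", "combined") else np.zeros_like(t)
    z = s if kind in ("epicenter", "combined") else np.zeros_like(t)
    peak_acc = amplitude * omega**2          # peak acceleration of a sine motion
    return t, x, z, peak_acc

# Example: 2 Hz motion with 10 mm amplitude for the combined earthquake type.
t, x, z, a_max = seismic_trajectory(amplitude=0.010, frequency=2.0, duration=5.0)
print(f"peak acceleration ~ {a_max:.2f} m/s^2")
```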
Abstract:
Pumping processes requiring a wide range of flow are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, the use of variable speed control allows the required process output to be delivered with a varying number of operated pump units and selected rotational speed references. However, the optimization of parallel-connected, rotational speed controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system in varying operating conditions. The information required for system modelling in typical parallel pumping applications, such as waste water treatment and various cooling and water delivery pumping tasks, can be limited, and the lack of real-time operation point monitoring often limits accurate energy efficiency optimization. Hence, alternative, easily implementable control strategies that can be adopted with minimum system data are necessary. This doctoral thesis concentrates on methods that allow the energy efficient use of variable speed controlled parallel pumps in systems where each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the suitable operating conditions for variable speed controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation with a frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable speed controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining, for each parallel pump unit, a speed reference that yields energy efficient operation of the pumping system.
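The flavour of such a sub-optimization can be sketched as follows, using the affinity laws, a quadratic head curve and a simple efficiency model; all coefficients are invented, and in practice the operation point would come from the pump characteristic curves and the frequency converter's estimates rather than from an analytic system curve.

```python
import numpy as np

RHO_G = 1000.0 * 9.81          # water density times gravity [N/m^3]

# Invented characteristic-curve coefficients for one pump at nominal speed:
A_H, B_H = 40.0, 8000.0        # head curve H(q) = A_H - B_H*q^2 [m], q in m^3/s
Q_BEP, ETA_MAX, C_ETA = 0.03, 0.75, 300.0   # efficiency curve around the BEP

def operating_point(Q_total, k, h_static=10.0, k_sys=2000.0):
    """Relative speed n and per-pump flow q for k identical parallel pumps
    delivering Q_total into a static-plus-quadratic system curve."""
    q = Q_total / k
    h_req = h_static + k_sys * Q_total**2
    n = np.sqrt((h_req + B_H * q**2) / A_H)   # affinity-law scaling of the head curve
    return n, q, h_req

def shaft_power(Q_total, k):
    """Estimated total shaft power; efficiency drops when a pump runs far
    from its (speed-scaled) best efficiency point."""
    n, q, h = operating_point(Q_total, k)
    if n > 1.0:
        return np.inf, n                      # above nominal speed: infeasible
    eta = max(ETA_MAX - C_ETA * (q / n - Q_BEP)**2, 0.05)
    return RHO_G * Q_total * h / eta, n

def best_configuration(Q_total, max_pumps=4):
    """Choose the number of running pumps and common speed reference
    that minimises the estimated total shaft power."""
    options = [(k, *shaft_power(Q_total, k)) for k in range(1, max_pumps + 1)]
    return min(options, key=lambda t: t[1])

k, power, n = best_configuration(Q_total=0.06)  # e.g. 60 l/s total flow
print(f"run {k} pump(s) at {100*n:.0f} % speed, about {power/1000:.1f} kW")
```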
Abstract:
The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore’s law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. As an answer to these pressures, modern systems have moved towards on-chip multiprocessing technologies, and new on-chip multiprocessing architectures have emerged to exploit the tremendous advances in fabrication technology. Platform-based design is a possible solution to these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. Existing design methodologies for platform-based design do not provide full automation at every level of the design process, and the co-design of platform-based systems sometimes leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses these challenges and discusses the creation of a development framework for platform-based system design in the context of the SegBus platform, a distributed communication architecture. The research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, ensuring correct-by-design platforms. The solution is based on a model-based process: both the platform and the application are modeled using the Unified Modeling Language. The thesis develops a Domain Specific Language to support platform modeling, based on a corresponding UML profile, and Object Constraint Language constraints are used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution that is as accurate as possible at high abstraction levels. VHDL code is automatically generated in the form of “snippets” to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
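For flavour, the sort of structural rule such a framework automates might be sketched as below; the component names, the capacity rule and the mapping check are entirely hypothetical and do not reflect the actual SegBus metamodel or its OCL constraints.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One bus segment of a hypothetical SegBus-like platform model."""
    name: str
    max_units: int
    units: list = field(default_factory=list)

@dataclass
class Platform:
    segments: list

def check_platform(platform: Platform, mapping: dict) -> list:
    """Return violated structural constraints, mimicking what OCL constraints
    on a UML platform model would enforce before code generation."""
    errors = []
    for seg in platform.segments:
        if len(seg.units) > seg.max_units:
            errors.append(f"segment {seg.name} exceeds its unit capacity")
    placed = [u for seg in platform.segments for u in seg.units]
    for task, unit in mapping.items():
        if unit not in placed:
            errors.append(f"task {task} is mapped to unknown unit {unit}")
    return errors

# Usage: two segments, three functional units and a tiny application mapping.
platform = Platform([Segment("seg0", 2, ["dct", "vlc"]), Segment("seg1", 2, ["ctrl"])])
print(check_platform(platform, {"decode": "dct", "entropy": "vlc", "sync": "dma"}))
# -> ['task sync is mapped to unknown unit dma']
```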
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is very natural within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
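To make the dataflow notions above concrete, the following sketch shows a minimal actor/FIFO model and a hand-written quasi-static schedule for a two-actor pipeline; it is a toy in the spirit of RVC-CAL, not the model-checking-based tool flow described in the thesis.

```python
from collections import deque

class Fifo:
    """A bounded queue: the only allowed communication between actors."""
    def __init__(self, capacity=16):
        self.q = deque()
        self.capacity = capacity

class Actor:
    """A dataflow node: fires when its firing rule (enough input tokens,
    enough output space) is satisfied, independently of other actors."""
    def __init__(self, consume, produce, kernel, inputs, outputs):
        self.consume, self.produce = consume, produce
        self.kernel, self.inputs, self.outputs = kernel, inputs, outputs

    def can_fire(self):
        return (all(len(f.q) >= self.consume for f in self.inputs)
                and all(len(f.q) + self.produce <= f.capacity for f in self.outputs))

    def fire(self):
        tokens = [f.q.popleft() for f in self.inputs for _ in range(self.consume)]
        for out in self.kernel(tokens):
            for f in self.outputs:
                f.q.append(out)

# Two-actor pipeline: 'split' consumes 1 token and produces 2, 'merge' the reverse.
src, a_to_b, sink = Fifo(), Fifo(), Fifo()
split = Actor(1, 2, lambda t: [t[0], t[0] + 1], [src], [a_to_b])
merge = Actor(2, 1, lambda t: [t[0] + t[1]], [a_to_b], [sink])

src.q.extend(range(4))
# A quasi-static schedule: the repeating static sequence (split, merge)
# needs only one dynamic check per period instead of per-actor firing tests.
while split.can_fire():
    split.fire()
    merge.fire()
print(list(sink.q))   # -> [1, 3, 5, 7]
```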