37 results for Parallel and distributed systems

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the workunits are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications.
Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
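The core MPIT idea, overlapping communication and computation by dedicating one thread to communication, can be illustrated with a small sketch. This is not the thesis's implementation: real MPIT would issue MPI calls from the communication thread, which is here replaced by a simple queue hand-off.

```python
import threading
import queue

# Hypothetical sketch of the MPIT idea: a dedicated communication thread
# drains results while the main thread keeps computing, so communication
# and computation overlap. The queue stands in for MPI send operations.

def run_mpit_style(workunits):
    outbox = queue.Queue()
    sent = []

    def communicator():
        # Dedicated communication thread (would call MPI_Send in MPIT).
        while True:
            item = outbox.get()
            if item is None:          # sentinel: computation is finished
                break
            sent.append(item)

    t = threading.Thread(target=communicator)
    t.start()
    for w in workunits:               # computation stays on the main thread
        outbox.put(w * w)             # hand each result to the communicator
    outbox.put(None)
    t.join()
    return sent

print(run_mpit_style([1, 2, 3]))      # -> [1, 4, 9]
```

Because the single consumer drains a FIFO queue, results arrive in order; in a real MPI setting the communication thread could also run the application-specific scheduler, as described above.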

Relevance:

100.00%

Publisher:

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at the ground level, can cause transient faults. This can eventually induce a faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to design agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach in which the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
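The reconfiguration mechanism described above can be sketched informally: a monitoring agent detects a faulty core and remaps its tasks to healthy cores so the system stays operational. This toy sketch is not the thesis's Event-B model; the remapping policy and all names are invented for illustration.

```python
# Illustrative agent-style reconfiguration for a many-core platform:
# tasks mapped to a core that is no longer healthy are moved to the
# least-loaded healthy core. A real design would derive this behaviour
# by formal refinement, as the abstract describes.

def reconfigure(mapping, healthy):
    """Remap tasks on failed cores to the least-loaded healthy core."""
    new_mapping = {}
    load = {c: 0 for c in healthy}
    for task, core in mapping.items():
        if core in healthy:                  # unaffected tasks stay put
            new_mapping[task] = core
            load[core] += 1
    for task, core in mapping.items():
        if core not in healthy:              # fault detected on this core
            target = min(load, key=load.get) # simple load-balancing policy
            new_mapping[task] = target
            load[target] += 1
    return new_mapping

# Core 1 fails; its tasks t2 and t3 are spread over the healthy cores 0 and 2.
print(reconfigure({"t1": 0, "t2": 1, "t3": 1}, healthy={0, 2}))
```

The point of the formal approach is precisely to prove that such a remapping preserves the postulated system properties, rather than trusting an ad hoc policy like this one.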

Relevance:

100.00%

Publisher:

Abstract:

Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms are challenging and error-prone engineering tasks. In this thesis, we propose and validate a formal framework for development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness.
Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature industrial-strength tool support, the Rodin platform. Proof-based verification as well as the reliance on abstraction and decomposition adopted in Event-B provides the designers with powerful support for the development of complex systems. Moreover, the top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integration of such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from such areas as robotics, space, healthcare and the cloud domain.
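The quantitative side mentioned above relies on discrete-event simulation. The following self-contained sketch shows the core of such a simulator in plain Python, in the spirit of the SimPy experiments (SimPy itself is not used here, and the event names are invented): a priority queue of timestamped events is processed in time order, and a fault event schedules a reconfiguration two time units later.

```python
import heapq

# Minimal discrete-event simulation loop: events are (time, name) pairs
# kept in a heap; a "fault" event triggers a delayed "reconfigure" event,
# mimicking the resilience scenarios analysed quantitatively in the thesis.

def simulate(events, horizon):
    """Process events in timestamp order up to the given horizon."""
    heapq.heapify(events)
    log = []
    while events:
        t, name = heapq.heappop(events)
        if t > horizon:
            break
        log.append((t, name))
        if name == "fault":                       # a change to absorb
            heapq.heappush(events, (t + 2, "reconfigure"))
    return log

trace = simulate([(1, "start"), (3, "fault")], horizon=10)
print(trace)   # -> [(1, 'start'), (3, 'fault'), (5, 'reconfigure')]
```

In the integrated framework, the event routines would be derived from the verified Event-B model, so the simulated behaviour stays consistent with the proved one.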

Relevance:

100.00%

Publisher:

Abstract:

The goal of this thesis is to define and validate a software engineering approach for the development of a distributed system for the modeling of composite materials, based on the analysis of various existing software development methods. We reviewed the main features of: (1) software engineering methodologies; (2) distributed system characteristics and their effect on software development; (3) composite materials modeling activities and the requirements for the software development. Using design science as the research methodology, the distributed system for creating models of composite materials is created and evaluated. The empirical experiments we conducted showed good convergence of modeled and real processes. Throughout the study, we paid attention to the complexity and importance of the distributed system and to a deep understanding of modern software engineering methods and tools.

Relevance:

100.00%

Publisher:

Abstract:

In the modern high-tech industry, Advanced Planning and Scheduling (APS) systems provide the basis for e-business solutions towards the suppliers and the customers. One objective of this thesis was to clarify modern supply chain management with APS systems, concentrating especially on the area of Collaborative Planning. For Advanced Planning and Scheduling systems to be complete and usable, user interfaces are needed. The current Visual Basic user interfaces have faced many complaints and arguments from the users as well as from the development team. This thesis analyzes the reasons and causes for the encountered problems and also provides ways to overcome them. The decision has been made to build the new user interfaces to be Web-enabled. Therefore, another objective of this thesis was to research and find suitable technologies for building the Web-based user interfaces for Advanced Planning and Scheduling systems in the Nokia Demand/Supply Planning business area. A comparison between the most suitable technologies is made. Usability issues of Web-enabled user interfaces are also covered. The empirical part of the thesis includes the design and implementation of a Web-based user interface with the chosen technology for a particular APS module that enables Collaborative Planning with suppliers.

Relevance:

100.00%

Publisher:

Abstract:

This master's thesis examines the electronics and telecommunications (E&T) industry and the robots and robot systems closely associated with it. The goal is to define a testing method for robots suited to the processes of the E&T industry. A further goal is to assess the suitability of two ABB robots for the needs of the E&T industry. A few systematic methods and tools for the design of manufacturing systems are also discussed. The thesis begins by studying the current state of the electronics and telecommunications industry and surveying its prevailing and predicted trends. The topics of collaborative manufacturing and the requirements that the E&T industry places on manufacturing systems are covered in detail. The main subjects of the study are robots, in particular the ABB IRB 140 and IRB 340, and the definition of a testing method for robots. The tests carried out with the IRB 340 are described; they were performed both with and without the aid of a machine vision system. The robot tests conducted at TTKK are also covered. The robot test results are analyzed and compared with other robots. The testing methods are based on the ISO 9283 standard. The final part of the thesis presents methods and tools for the systematic design of robot systems, including Modular Function Deployment (MFD) and the System Design Method (SDM).

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this master's thesis was to develop the drive and support of a fiberizing drum apparatus. Owing to its scope, the work was limited to the five smallest sizes of the product series. The first part of the thesis discusses the theory of fiberizing and the equipment suited to it. The required starting power, which is essential for designing the drive, is examined from first principles of physics. Background for the theory was drawn from earlier studies and from the literature. As a result of this examination, the theory has been further developed and now corresponds to reality better than before. Methods of systematic machine design were used in the search for alternative ways of implementing the support and the drive. The resulting ideas were evaluated on techno-economic grounds, and the best alternatives were selected for further development. In the further development phase, the alternative solutions were examined at the component level, and detailed cost calculations were made for them. As a result of the work, an implementation alternative for the support and the drive is presented that enables significant cost savings. The ambitious cost-saving target of 30 percent was achieved.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this master's thesis is to study which processes increase the auxiliary power consumption in carbon capture and storage processes and whether it is possible to reduce the auxiliary power consumption with variable speed drives. The cost of carbon capture and storage is also studied. Data about auxiliary power consumption in carbon capture is gathered from various studies and estimates made by various research centres. Based on these studies, a view is presented of how the auxiliary power consumption is divided between different processes in carbon capture. In a literature review, the operation of three basic carbon capture systems is described. Different methods to transport carbon dioxide and carbon dioxide storage options are also described in this section. At the end of the thesis, the processes that consume most of the auxiliary power are identified and possibilities to reduce the auxiliary power consumption are evaluated. The costs of carbon capture, transport and storage are also evaluated at this point, including the case where carbon capture and storage systems are fully deployed. Based on the results, it can be estimated in which processes variable speed drives can be used and what kind of cost and power consumption reductions could be achieved. The results also show how large a project carbon capture and storage would be if fully deployed.
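The reason variable speed drives can cut auxiliary power so sharply is worth a one-line calculation. For centrifugal pumps and fans the affinity laws give shaft power proportional to the cube of rotational speed; the figures below are illustrative only, not data from the thesis.

```python
# Back-of-the-envelope affinity-law estimate (illustrative numbers):
# for centrifugal pumps and fans, power scales with the cube of speed,
# so running at partial speed instead of throttling saves a large share
# of the auxiliary power in capture-plant fans and pumps.

def vsd_power(rated_kw, speed_fraction):
    """Shaft power at reduced speed by the cube (affinity) law."""
    return rated_kw * speed_fraction ** 3

# A hypothetical 100 kW flue-gas fan run at 80 % speed:
print(vsd_power(100.0, 0.8))   # -> 51.2 (kW), roughly half the rated power
```

Throttling the same fan with a damper would keep consumption near the rated power, which is why the drive-based approach is attractive for the high-consumption processes identified in the thesis.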

Relevance:

100.00%

Publisher:

Abstract:

Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating the simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm and the environment. In this thesis a conservative, parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. This thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the simulation time required. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other non-processed null messages. The multiple message simulation forms groups of messages, as it simulates several messages before it releases the newly created messages.
If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. The performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy for simulation results is required. Distributed simulation is also analyzed in order to find out the effect of the different elements in the implemented simulation environment. This analysis is performed by using critical path analysis. Critical path analysis allows determination of a lower bound for the simulation time. In this thesis, critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
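The null message cancellation idea above can be sketched compactly: when a null message with a later timestamp arrives on a channel, the earlier unprocessed null messages on that channel become redundant, since the later one carries a stronger lookahead guarantee. This is only a toy illustration of the principle, not the Diworse implementation.

```python
# Toy sketch of null-message cancellation in a conservative
# (Chandy-Misra style) simulation: real messages are always kept,
# while only the latest unprocessed null message survives on a channel.

def cancel_null_messages(channel):
    """channel: list of (timestamp, kind) with kind 'real' or 'null'."""
    kept, latest_null = [], None
    for ts, kind in channel:
        if kind == "null":
            if latest_null is None or ts > latest_null[0]:
                latest_null = (ts, kind)   # later null supersedes earlier ones
        else:
            kept.append((ts, kind))        # real messages must be simulated
    if latest_null is not None:
        kept.append(latest_null)
    return sorted(kept)

ch = [(1, "null"), (2, "null"), (3, "real"), (4, "null")]
print(cancel_null_messages(ch))   # -> [(3, 'real'), (4, 'null')]
```

Dropping the superseded null messages is exactly what shrinks the null-message processing time that the abstract identifies as a major overhead.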

Relevance:

100.00%

Publisher:

Abstract:

The bioavailability of metals and their potential for environmental pollution depend not simply on total concentrations, but are to a great extent determined by their chemical form. Consequently, knowledge of aqueous metal species is essential in investigating potential metal toxicity and mobility. The overall aim of this thesis is, thus, to determine the species of major and trace elements and the size distribution among the different forms (e.g. ions, molecules and mineral particles) in selected metal-enriched Boreal river and estuarine systems by utilising filtration techniques and geochemical modelling. On the basis of the spatial physicochemical patterns found, the fractionation and complexation processes of elements (mainly related to input of humic matter and pH change) were examined. Dissolved (<1 kDa), colloidal (1 kDa-0.45 μm) and particulate (>0.45 μm) size fractions of sulfate, organic carbon (OC) and 44 metals/metalloids were investigated in the extremely acidic Vörå River system and its estuary in W Finland, and in four river systems in SW Finland (Sirppujoki, Laajoki, Mynäjoki and Paimionjoki), largely affected by soil erosion and acid sulfate (AS) soils. In addition, geochemical modelling was used to predict the formation of free ions and complexes in these investigated waters. One of the most important findings of this study is that the very large amounts of metals known to be released from AS soils (including Al, Ca, Cd, Co, Cu, Mg, Mn, Na, Ni, Si, U and the lanthanoids) occur and can prevail mainly in toxic forms throughout acidic river systems: as free ions and/or sulfate complexes. This has serious effects on the biota, and especially dissolved Al is expected to have acute effects on fish and other organisms, but also other potentially toxic dissolved elements (e.g. Cd, Cu, Mn and Ni) can have fatal effects on the biota in these environments.
In upstream areas that are generally relatively forested (higher pH and contents of OC) fewer bioavailable elements (including Al, Cu, Ni and U) may be found due to complexation with the more abundantly occurring colloidal OC. In the rivers in SW Finland total metal concentrations were relatively high, but most of the elements occurred largely in a colloidal or particulate form and even elements expected to be very soluble (Ca, K, Mg, Na and Sr) occurred to a large extent in colloidal form. According to geochemical modelling, these patterns may only to a limited extent be explained by in-stream metal complexation/adsorption. Instead there were strong indications that the high metal concentrations and dominant solid fractions were largely caused by erosion of metal bearing phyllosilicates. A strong influence of AS soils, known to exist in the catchment, could be clearly distinguished in the Sirppujoki River as it had very high concentrations of a metal sequence typical of AS soils in a dissolved form (Ba, Br, Ca, Cd, Co, K, Mg, Mn, Na, Ni, Rb and Sr). In the Paimionjoki River, metal concentrations (including Ba, Cs, Fe, Hf, Pb, Rb, Si, Th, Ti, Tl and V; not typical of AS soils in the area) were high, but it was found that the main cause of this was erosion of metal bearing phyllosilicates and thus these metals occurred dominantly in less toxic colloidal and particulate fractions. In the two nearby rivers (Laajoki and Mynäjoki) there was influence of AS soils, but it was largely masked by eroded phyllosilicates. Consequently, rivers draining clay plains sensitive to erosion, like those in SW Finland, have generally high background metal concentrations due to erosion. Thus, relying on only semi-dissolved (<0.45 μm) concentrations obtained in routine monitoring, or geochemical modelling based on such data, can lead to a great overestimation of the water toxicity in this environment. 
The potentially toxic elements that are of concern in AS soil areas will ultimately be precipitated in the recipient estuary or sea, where the acidic metal-rich river water will gradually be diluted/neutralised with brackish seawater. Along such a rising pH gradient, Al, Cu and U will precipitate first, together with organic matter, closest to the river mouth. Manganese is relatively persistent in solution and thus precipitates further down the estuary as Mn oxides, together with elements such as Ba, Cd, Co, Cu and Ni. Iron oxides, on the contrary, are not important scavengers of metals in the estuary; they are predicted to be associated only with As and PO4.

Relevance:

100.00%

Publisher:

Abstract:

The necessity of EC (Electronic Commerce) and enterprise systems integration is perceived from the integrated nature of enterprise systems. The proven benefits of EC in providing competitive advantages force enterprises to adopt and integrate EC with their enterprise systems. Integration is a complex task in facilitating a seamless flow of information and data between different systems within and across enterprises. Different systems have different platforms; thus, to integrate systems with different platforms and infrastructures, integration technologies such as middleware, SOA (Service-Oriented Architecture), ESB (Enterprise Service Bus), JCA (J2EE Connector Architecture), and B2B (Business-to-Business) integration standards are required. Major software vendors, such as Oracle, IBM, Microsoft, and SAP, suggest various solutions to address EC and enterprise systems integration problems. There is limited literature on the integration of EC and enterprise systems in detail. Most of the studies in this area have focused on the factors which influence the adoption of EC by enterprises, while other studies provide limited information about a specific platform or integration methodology in general. Therefore, this thesis is conducted to cover the technical details of EC and enterprise systems integration, addressing both the adoption factors and the integration solutions. In this study, a large body of literature was reviewed and different solutions were investigated. Different enterprise integration approaches as well as the most popular integration technologies were investigated. Moreover, various methodologies of integrating EC and enterprise systems were studied in detail and different solutions were examined. In this study, the influential factors for adopting EC in enterprises were studied based on previous literature and categorized into technical, social, managerial, financial, and human resource factors.
Moreover, integration technologies were categorized based on three levels of integration: data, application, and process. In addition, different integration approaches were identified and categorized based on their communication and platform. Different EC integration solutions were also investigated and categorized based on the identified integration approaches. By considering different aspects of integration, this study is a valuable asset to architects, developers, and system integrators seeking to integrate and adopt EC with enterprise systems.

Relevance:

100.00%

Publisher:

Abstract:

The Laboratory of Intelligent Machines researches and develops energy-efficient power transmissions and automation for mobile construction machines and industrial processes. The laboratory's particular areas of expertise include mechatronic machine design using virtual technologies and simulators, and demanding industrial robotics. The laboratory has collaborated extensively with industrial actors and has participated in significant international research projects, particularly in the field of robotics. For years, dSPACE tools were the only hardware used in the lab to develop different control algorithms in real time. dSPACE's hardware systems are in widespread use in the automotive industry and are also employed in drives, aerospace, and industrial automation. But new competitors are developing sophisticated systems whose features convinced the laboratory to test new products. One of these competitors is National Instruments (NI). In order to get to know the specifications and capabilities of NI tools, an agreement was made to test an NI evaluation system. This system is used to control a 1-D hydraulic slider. The objective of this research project is to develop a control scheme for the teleoperation of a hydraulically driven manipulator, to implement a control algorithm for both human-machine interaction and machine-task environment interaction on the NI and dSPACE systems simultaneously, and to compare the results.
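To give a flavour of the kind of control loop that would be implemented on either the dSPACE or NI platform, here is a minimal proportional position controller for a 1-D slider. The gain, time step, and first-order plant model are invented for illustration; the actual hydraulic dynamics and teleoperation scheme are far richer.

```python
# Illustrative proportional (P) position control of a 1-D slider with a
# made-up first-order plant: the slider's velocity is proportional to the
# control signal. Real hydraulic dynamics and gains would differ.

def p_control(setpoint, position, kp=0.5, steps=50, dt=0.1):
    """Run a fixed-step P-control loop and return the final position."""
    for _ in range(steps):
        error = setpoint - position
        u = kp * error              # proportional control law
        position += u * dt          # integrate the slider position
    return position

final = p_control(setpoint=1.0, position=0.0)
print(round(final, 3))              # converges toward the 1.0 setpoint
```

On a real-time platform, the loop body would run at a fixed sample rate with the position read from a sensor and `u` written to the valve driver; comparing how the two platforms execute such a loop is exactly the kind of benchmark the project describes.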

Relevance:

100.00%

Publisher:

Abstract:

Mammalian spermatozoa gain their fertilizing ability during maturation in the epididymis. Proteins and lipids secreted into the epididymal lumen remodel the sperm membrane, thereby providing the structure necessary for progressive motility and oocyte interaction. In the current study, genetically modified mouse models were utilized to determine the role of novel genes and regulatory systems in the postnatal development and function of the epididymis. Ablation of the mouse β-defensin, Defb41, altered the flagellar movements of sperm and reduced the ability of sperm to bind to the oocyte in vitro. The Defb41-deficient iCre knock-in mouse model was furthermore utilized to generate Dicer1 conditional knock-out (cKO) mice. DICER1 is required for the production of mature microRNAs in the regulation of gene expression by RNA interference. Dicer1 cKO gave rise to dedifferentiation of the epididymal epithelium and an altered expression of genes involved in lipid synthesis. As a consequence, the cholesterol:polyunsaturated fatty acid ratio of the Dicer1 cKO sperm membrane was increased, which resulted in membrane instability and infertility. In conclusion, the results of the Defb41 study further support the important role of β-defensin family members in sperm maturation. The regulatory role of Dicer1 was also shown to be required for epididymal development. In addition, the study is the first to show a clear connection between lipid homeostasis in the epididymis and sperm membrane integrity. Taken together, the results give important new evidence on the regulatory system guiding epididymal development and function.

Relevance:

100.00%

Publisher:

Abstract:

The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modeling framework with its rationale built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture the ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathways recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs.
We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) is another nature-inspired modeling framework studied in this thesis. The rationale of reaction systems is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems is a complementary modeling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, periodicity, etc., to perform model checking of reaction-system-based models. We prove that the complexity of the decision problems related to these properties varies from P to NP-complete and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm to list the conserved sets of a given reaction system.
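The reaction systems formalism lends itself to a very compact encoding: a reaction is enabled on a state exactly when all its reactants are present and none of its inhibitors is, and the next state is the union of the products of the enabled reactions. The sketch below uses species names loosely inspired by the heat shock response case study, but the specific reactions are invented for illustration.

```python
# Minimal encoding of a reaction systems (RS) step. Each reaction is a
# triple (reactants, inhibitors, products) of sets; the successor state
# is the union of products of all enabled reactions. No concentrations
# are tracked, matching the qualitative, cause-effect nature of RS.

def rs_step(state, reactions):
    nxt = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            nxt |= products          # enabled reaction contributes products
    return nxt

# Invented example: "hsf" trimerizes unless "hsp" inhibits it;
# the trimer bound to "hse" produces "hsp".
reactions = [
    ({"hsf"}, {"hsp"}, {"hsf3"}),
    ({"hsf3", "hse"}, set(), {"hsp"}),
]
print(sorted(rs_step({"hsf", "hse"}, reactions)))   # -> ['hsf3']
```

Note that entities not sustained by any enabled reaction simply vanish from the next state; there is no permanency, which is one of the defining assumptions of the RS framework.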