877 results for Parallel And Distributed Computing
Resumo:
Simultaneous localization and mapping (SLAM) is a very important problem in mobile robotics. Many solutions have been proposed by different scientists during the last two decades; nevertheless, few studies have considered the use of multiple sensors simultaneously. The proposed solution combines several data sources with the aid of an Extended Kalman Filter (EKF). Two approaches are proposed. The first is to run the ordinary EKF SLAM algorithm for each data source separately in parallel and then, at the end of each step, fuse the results into one solution. The second is to use multiple data sources simultaneously in a single filter. A comparison of the computational complexity of the two methods is also presented: the first method is almost four times faster than the second.
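The per-step fusion in the first approach can be sketched as track-to-track fusion of two EKF estimates. A minimal sketch, assuming independent estimates combined by inverse-covariance weighting (the abstract does not specify the exact fusion rule):

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Fuse two state estimates by inverse-covariance weighting.

    Assumes the two EKF tracks are independent; x1/x2 are state vectors,
    P1/P2 their covariance matrices.
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(P1_inv + P2_inv)        # fused covariance
    x = P @ (P1_inv @ x1 + P2_inv @ x2)       # fused state
    return x, P

# Example: two sensors estimate the same 2-D robot position,
# the more certain sensor (smaller covariance) gets more weight.
x_a, P_a = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
x_b, P_b = np.array([1.2, 1.8]), np.diag([1.0, 1.0])
x_f, P_f = fuse_estimates(x_a, P_a, x_b, P_b)
```

The fused covariance is never larger than either input, which is what makes running several cheap filters in parallel and fusing them attractive.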
Resumo:
Highly specialized robots are necessary in ITER (International Thermonuclear Experimental Reactor), both in the manufacturing and maintenance of the reactor, due to the demanding environment. The sectors of the ITER vacuum vessel (VV) require more stringent tolerances than normally expected for the size of the structure involved. The VV consists of nine sectors that are to be welded together, and it has a toroidal chamber structure. The task of the designed robot is to carry the welding apparatus along a path with a stringent tolerance during the assembly operation. In addition to the initial vacuum vessel assembly, after a limited running period, sectors need to be replaced for repair. Mechanisms with closed-loop kinematic chains are used in the design of the robots in this work. One version is a purely parallel manipulator and another is a hybrid manipulator where parallel and serial structures are combined. Traditional industrial robots, which generally have their links actuated in series, are inherently not very rigid and have poor dynamic performance in high-speed and high dynamic loading conditions. Compared with open-chain manipulators, parallel manipulators have high stiffness, high accuracy and a high force/torque capacity in a reduced workspace. Parallel manipulators have a mechanical architecture where all of the links are connected both to the base and to the end-effector of the robot. The purpose of this thesis is to develop special parallel robots for the assembly, machining and repair of the VV of ITER. The process of assembling and machining the vacuum vessel needs a special robot. By studying the structure of the vacuum vessel, two novel parallel robots were designed and built; they have six and ten degrees of freedom driven by hydraulic cylinders and electrical servo motors. Kinematic models for the proposed robots were defined and two prototypes built.
Experiments for machine cutting and laser welding with the 6-DOF robot were carried out. It was demonstrated that the parallel robots are capable of holding all necessary machining tools and welding end-effectors in all positions accurately and stably inside the vacuum vessel sector. The kinematic models appeared to be complex especially in the case of the 10-DOF robot because of its redundant structure. Multibody dynamics simulations were carried out, ensuring sufficient stiffness during the robot motion. The entire design and testing processes of the robots appeared to be complex tasks due to the high specialization of the manufacturing technology needed in the ITER reactor, while the results demonstrate the applicability of the proposed solutions quite well. The results offer not only devices but also a methodology for the assembly and repair of ITER by means of parallel robots.
Resumo:
The aim of this study is to gain a better understanding of the structure and the deformation history of a NW-SE trending regional, crustal-scale shear structure in the Åland archipelago, SW Finland, called the Sottunga-Jurmo shear zone (SJSZ). Approaches involving e.g. structural geology, geochronology, geochemistry and metamorphic petrology were utilised in order to reconstruct the overall deformation history of the study area. The study therefore describes several features of the shear zone including structures, kinematics and lithologies within the study area, the ages of the different deformation phases (ductile to brittle) within the shear zone, as well as some geothermobarometric results. The results indicate that the SJSZ outlines a major crustal discontinuity between the extensively migmatized rocks NE of the shear zone and the unmigmatised, amphibolite facies rocks SW of the zone. The main SJSZ shows overall dextral lateral kinematics with a SW-side up vertical component and deformation partitioning into pure shear and simple shear dominated deformation styles that was intensified toward later stages of the deformation history. The deformation partitioning resulted in complex folding and refolding against the SW margin of the SJSZ, including conical and sheath folds, and in a formation of several minor strike-slip shear zones both parallel and conjugate to the main SJSZ in order to accommodate the regional transpressive stresses. Different deformation phases within the study area were dated by SIMS (zircon U-Pb), ID-TIMS (titanite U-Pb) and 40Ar/39Ar (pseudotachylyte wholerock) methods. The first deformation phase within the ca. 1.88 Ga rocks of the study area is dated at ca. 1.85 Ga, and the shear zone was reactivated twice within the ductile regime (at ca. 1.83 Ga and 1.79 Ga), during which the strain was successively increasingly partitioned into the main SJSZ and the minor shear zones. 
The age determinations suggest that the orogenic processes within the study area did not occur in a temporal continuum; instead, the metamorphic zircon rims and titanites show distinct, 10-20 Ma long breaks in deformation between phases of active deformation. The results of this study further imply slow cooling of the rocks through 600-700°C so that at 1.79 Ga the temperature was still at least 600°C. The highest recorded metamorphic pressures are 6.4-7.1 kbar. At the late stages or soon after the last ductile phase (ca. 1.79 Ga), relatively high-T mylonites and ultramylonites were formed, witnessing extreme deformation partitioning and high strain rates. After the rocks reached lower amphibolite facies to amphibolite-greenschist facies transitional conditions (ca. 500-550°C), they cooled rapidly, probably due to crustal uplift and exhumation. The shear zone was reactivated at least once within the semi-brittle to brittle regime between ca. 1.79 Ga and 1.58 Ga, as evidenced by cataclasites and pseudotachylytes. In summary, the results of this study suggest that the Sottunga-Jurmo shear zone (and the South Finland shear zone) defines a major crustal discontinuity, and played a central role in accommodating the regional stresses during and after the Svecofennian orogeny.
Resumo:
Organizational creativity is increasingly important for organizations aiming to survive and thrive in complex and unexpectedly changing environments. It is a precondition of innovation and a driver of an organization’s performance success. Whereas innovation research increasingly promotes high-involvement and participatory innovation, the models of organizational creativity are still mainly based on an individual-creativity view. Likewise, the definitions of organizational creativity and innovation overlap: sometimes they are used as interchangeable constructs, while at other times they are seen as distinct. Creativity is seen as the generation of novel and useful ideas, whereas innovation is seen as the implementation of these ideas. The research streams of innovation and organizational creativity seem to be advancing somewhat separately, although together they could provide many synergy advantages. This study therefore addresses three main research gaps. First, as knowledge and knowing are increasingly expertized and distributed in organizations, the conceptualization of organizational creativity needs to face that perspective rather than relying on the individual-creativity view. Thus, the conceptualization of organizational creativity needs clarification, especially as an organizational-level phenomenon (i.e., creativity by an organization). Second, approaches to consciously build organizational creativity to increase the capacity of an organization to demonstrate novelty in its knowledgeable actions are rare. Current creativity techniques are mainly based on individual-creativity views, and they mainly focus on occasional problem-solving cases among a limited number of individuals, whereas the development of collective creativity and creativity by the organization lacks approaches.
Third, in terms of organizational creativity as a collective phenomenon, the engagement, contributions, and participation of organizational members in activities of common meaning creation are more important than individual-creativity skills. Therefore, development approaches that foster creativity as a social, emergent, embodied, and collective phenomenon are needed to complement current creativity techniques. To address these gaps, the study takes a multiparadigm perspective on the following three objectives. The first objective of this study is to clarify and extend the conceptualization of organizational creativity. The second is to study the development of organizational creativity. The third is to explore how an improvisational-theater-based approach fosters organizational creativity. The study consists of two parts comprising the introductory part (part I) and six publications (part II). Each publication addresses the research questions of the thesis through detailed subquestions. The study makes three main contributions to the research of organizational creativity. First, it contributes toward the conceptualization of organizational creativity by extending the current view of organizational creativity. This study views organizational creativity as a multilevel construct comprising both individual and collective (group and organizational) creativity. In contrast to current views of organizational creativity, this study builds on organizational (collective) knowledge that is based on and demonstrated through the knowledgeable actions of an organization as a whole.
The study defines organizational creativity as the overall ability of an organization to demonstrate novelty in its knowledgeable actions (through what it does and how it does what it does). Second, this study contributes toward the development of organizational creativity as a multilevel phenomenon, introducing developmental approaches that address two or more of these levels simultaneously. More specifically, the study presents cross-level approaches to building organizational creativity, using an approach based on improvisational theater and considering the assessment of organizational renewal capability. Third, the study contributes to the development of organizational creativity through an improvisational-theater-based approach in a twofold manner. First, the approach fosters individual and collective creativity simultaneously and builds space for creativity to occur. Second, it models collective and distributed creativity processes, thereby contributing to the conceptualization of organizational creativity.
Resumo:
The assembly and maintenance of the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV) is highly challenging since the tasks performed by the robot involve welding, material handling, and machine cutting from inside the VV. The VV is made of stainless steel, which has poor machinability and tends to work harden very rapidly, and all the machining operations need to be carried out from inside the ITER VV. A general industrial robot cannot be used due to its poor stiffness in the heavy-duty machining process, which would cause many problems, such as poor surface quality, tool damage, and low accuracy. Therefore, one of the most suitable options is a lightweight mobile robot which is able to move around inside the VV and perform different machining tasks by changing between different cutting tools. Reducing the mass of the robot manipulators offers many advantages: reduced material costs, reduced power consumption, the possibility of using smaller actuators, and a higher payload-to-robot weight ratio. Offsetting these advantages, the lighter weight robot is more flexible, which makes it more difficult to control. To achieve good machining surface quality, the tracking of the end effector must be accurate, and an accurate model for the more flexible robot must be constructed. This thesis studies the dynamics and control of a hydraulically driven 10 degree-of-freedom (DOF) redundant hybrid robot (a 4-DOF serial mechanism and a 6-DOF 6-UPS hexapod parallel mechanism) with flexible rods under the influence of machining forces. Firstly, the flexibility of the bodies is described using the floating frame of reference formulation (FFRF). A finite element model (FEM) provided the Craig-Bampton (CB) modes needed for the FFRF. A dynamic model of the system of six closed-loop mechanisms was assembled using the constrained Lagrange equations and the Lagrange multiplier method.
Subsequently, the reaction forces between the parallel and serial parts were used to study the dynamics of the serial robot. A PID control based on position predictions was implemented independently to control the hydraulic cylinders of the robot. Secondly, in machining, to achieve greater end effector trajectory tracking accuracy for surface quality, a robust control of the actuators for the flexible link has to be deduced. This thesis investigates two schemes of intelligent control for the hydraulically driven parallel mechanism based on the dynamic model: (1) a fuzzy-PID self-tuning controller combining conventional PID control with fuzzy logic, and (2) adaptive neuro-fuzzy inference system-PID (ANFIS-PID) self-tuning of the gains of the PID controller; both are implemented independently to control each hydraulic cylinder of the parallel mechanism based on rod length predictions. The serial component of the hybrid robot can be analyzed using the equilibrium of reaction forces at the universal joint connections of the hexa-element. To achieve precise positional control of the end effector for maximum precision machining, the hydraulic cylinder should be controlled to hold the hexa-element. Thirdly, a finite element approach of multibody systems using the Special Euclidean group SE(3) framework is presented for a parallel mechanism with flexible piston rods under the influence of machining forces. The flexibility of the bodies is described using the nonlinear interpolation method with an exponential map. The equations of motion take the form of a differential algebraic equation on a Lie group, which is solved using a Lie group time integration scheme. The method relies on the local description of motions, so that it provides a singularity-free formulation, and no parameterization of the nodal variables needs to be introduced.
The flexible slider constraint is formulated using a Lie group and used for modeling a flexible rod sliding inside a cylinder. The dynamic model of the system of six closed loop mechanisms was assembled using Hamilton’s principle and the Lagrange multiplier method. A linearized hydraulic control system based on rod length predictions was implemented independently to control the hydraulic cylinders. Consequently, the results of the simulations demonstrating the behavior of the robot machine are presented for each case study. In conclusion, this thesis studies the dynamic analysis of a special hybrid (serial-parallel) robot for the above-mentioned special task involving the ITER and investigates different control algorithms that can significantly improve machining performance. These analyses and results provide valuable insight into the design and control of the parallel robot with flexible rods.
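The self-tuning PID idea can be illustrated with a toy controller. The sketch below replaces the fuzzy/ANFIS rule base with a single hand-written gain-scheduling rule and drives a generic first-order plant; all gains, thresholds, and the plant model are invented for illustration and are not the thesis's values:

```python
class SelfTuningPID:
    """Minimal discrete PID with a crude gain-scheduling rule standing in
    for the fuzzy/ANFIS tuner described in the thesis (illustrative only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        # Stand-in for the fuzzy rule base: boost the proportional gain
        # for large errors, relax it near the setpoint.
        kp = self.kp * (1.5 if abs(error) > 0.5 else 1.0)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant (a stand-in for a cylinder rod length)
# toward a setpoint of 1.0.
pid = SelfTuningPID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
y = 0.0
for _ in range(5000):
    u = pid.step(1.0, y)
    y += (u - y) * 0.01   # simple first-order plant response
```

In the thesis the tuner adjusts all three gains online per cylinder; here a single rule on the proportional gain conveys the idea.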
Resumo:
With a Sales and Operations Planning (S&OP) process, a company aims to manage demand and supply through planning and forecasting. The studied company uses an integrated S&OP process to improve its operations. The aim of this thesis is to develop this business process by finding the best possible way to manage soft information in S&OP, while also understanding the importance and types (assumptions, risks, and opportunities) of soft information in S&OP. Soft information in S&OP helps to refine future S&OP planning by taking into account the uncertainties that affect the balance of long-term demand and supply (typically 12-18 months). A literature review was used to create a framework for a soft information management process in S&OP. The existing literature offered no concrete way to manage soft information. Because of this gap, the Knowledge Management literature, which deals with the same type of information management, was used as the basis for creating the framework. The framework defines a four-stage process for managing soft information in S&OP, including the required support systems. The first phase is collecting and acquiring soft information in S&OP, which also includes categorization. Categorization is the cornerstone for identifying the different requirements that need to be taken into consideration when managing soft information in the S&OP process. The next phase focuses on storing the data, ensuring that the soft information is managed in a common support system so that the following phase, sharing and application, can make it available to the S&OP users who need it. The goal of the last phase is to use the soft information to understand the assumptions and thoughts behind the numbers in S&OP plans. In this soft information management process, the support system plays a key role.
The support system, such as an S&OP tool, ensures that soft information is stored in the right places, kept up to date, and remains relevant. The soft information management process in S&OP strives to improve the documentation of relevant soft information behind the S&OP plans in the S&OP support system. The process offers individuals an opportunity to review, comment on, and evaluate soft information in S&OP created by themselves or others. In the case company, it was noticed that poorly documented and distributed soft information in S&OP caused mistrust towards the planning.
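The collect-and-categorize phase can be sketched as a small data model. All class and field names below are hypothetical illustrations of the framework's categories of soft information (assumptions, risks, and opportunities) and its review-and-comment workflow, not the thesis's actual system:

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    ASSUMPTION = "assumption"
    RISK = "risk"
    OPPORTUNITY = "opportunity"

@dataclass
class SoftInfoItem:
    """One piece of soft information attached to an S&OP plan."""
    author: str
    category: Category
    text: str
    horizon_months: int          # planning horizon, typically 12-18 months
    comments: list = field(default_factory=list)

    def review(self, reviewer, comment):
        # Other planners can review and comment, as the process requires.
        self.comments.append((reviewer, comment))

item = SoftInfoItem("planner_a", Category.RISK,
                    "Supplier capacity may drop in Q3", 12)
item.review("planner_b", "Agreed; mitigation planned")
```

Categorizing each item at collection time is what lets the later storing and sharing phases apply category-specific handling.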
Resumo:
Vacuolar H+-ATPase is a large multi-subunit protein that mediates ATP-driven vectorial H+ transport across the membranes. It is widely distributed and present in virtually all eukaryotic cells in intracellular membranes or in the plasma membrane of specialized cells. In subcellular organelles, ATPase is responsible for the acidification of the vesicular interior, which requires an intraorganellar acidic pH to maintain optimal enzyme activity. Control of vacuolar H+-ATPase depends on the potential difference across the membrane in which the proton ATPase is inserted. Since the transport performed by H+-ATPase is electrogenic, translocation of H+-ions across the membranes by the pump creates a lumen-positive voltage in the absence of a neutralizing current, generating an electrochemical potential gradient that limits the activity of H+-ATPase. In many intracellular organelles and cell plasma membranes, this potential difference established by the ATPase gradient is normally dissipated by a parallel and passive Cl- movement, which provides an electric shunt compensating for the positive charge transferred by the pump. The underlying mechanisms for the differences in the requirement for chloride by different tissues have not yet been adequately identified, and there is still some controversy as to the molecular identity of the associated Cl--conducting proteins. Several candidates have been identified: the ClC family members, which may or may not mediate nCl-/H+ exchange, and the cystic fibrosis transmembrane conductance regulator. In this review, we discuss some tissues where the association between H+-ATPase and chloride channels has been demonstrated and plays a relevant physiologic role.
Resumo:
Internet of Things (IoT) technologies are developing rapidly, and therefore several standards of interconnection protocols and platforms exist. The existence of heterogeneous protocols and platforms has become a critical challenge for IoT system developers. To mitigate this challenge, a few alliances and organizations have taken the initiative to build a framework that helps to integrate application silos. Some of these frameworks focus only on a specific domain like home automation. However, the resource constraints in a large proportion of connected devices make it difficult to build an interoperable system using such frameworks. Therefore, a general-purpose, lightweight interoperability framework that can be used for a range of devices is required. To tackle this heterogeneity, this work introduces an embedded, distributed and lightweight service bus, Lightweight IoT Service bus Architecture (LISA), which fits inside the network stack of a small real-time operating system for constrained nodes. LISA provides a uniform application programming interface for an IoT system on a range of devices with variable resource constraints. It hides platform and protocol variations underneath it, thus facilitating interoperability in IoT implementations. LISA is inspired by the Network on Terminal Architecture, a service-centric open architecture by Nokia Research Center. Unlike many other interoperability frameworks, LISA is designed specifically for resource-constrained nodes and it provides the essential features of a service bus for easy service-oriented architecture implementation. The presented architecture utilizes an intermediate computing layer, a Fog layer, between the small nodes and the cloud, thereby facilitating the federation of constrained nodes into subnetworks.
As a result of a modular and distributed design, the part of LISA running in the Fog layer handles the heavy lifting to assist the lightweight portion of LISA inside the resource constrained nodes. Furthermore, LISA introduces a new networking paradigm, Node Centric Networking, to route messages across protocol boundaries to facilitate interoperability. This thesis presents a concept implementation of the architecture and creates a foundation for future extension towards a comprehensive interoperability framework for IoT.
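The uniform service API that a bus like LISA exposes can be illustrated with a toy registry; the names and behavior below are generic and hypothetical, not LISA's actual interface:

```python
class ServiceBus:
    """Toy service registry illustrating the uniform API of a service bus:
    callers name a service, and the bus hides where and how it runs
    (constrained node, Fog layer, or cloud)."""

    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        # A provider (e.g. a sensor node) advertises a named service.
        self._services[name] = handler

    def request(self, name, payload):
        # A consumer addresses the service by name only.
        handler = self._services.get(name)
        if handler is None:
            raise KeyError(f"no provider for service '{name}'")
        return handler(payload)

bus = ServiceBus()
bus.register("temperature", lambda _: 21.5)   # e.g. a constrained node
reading = bus.request("temperature", None)
```

In a real deployment the registry itself would be distributed, with the Fog-layer portion doing the heavy lifting described above.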
Resumo:
This thesis examines the effects of historical consciousness on negotiations of ethnicity and the structuring of intergroup boundaries among teachers of national history in Québec. The ambiguity of ethnic dominance between Francophones and Anglophones contextualizes the way teachers from these groups historicize the meanings of the past in order to know and orient themselves "ethnically." Depending on how they construct intergroup realities, they may promote intergroup understanding or preserve a rigid coexistence. The first article theorizes how the capacities to historicize the past, or to generate moral forms of life for temporal orientation, support the construction of ethnicity. By developing a repertoire of parallel and equal tendencies of historical consciousness to understand fluctuations in the maintenance of ethnic boundaries, the article underlines the importance of the willingness to recognize the moral and historical agency of humans in making boundaries more permeable. The second article discusses a study of intergroup attitudes and mutual treatment between Francophone and Anglophone history teachers. While most Francophone respondents are indifferent to the social realities and historical experiences of Anglo-Quebecers, all Anglophone respondents are aware of, and teach, those of Franco-Quebecers. This divergence implies a dissimilarity in how past intergroup relations are historicized. The non-recognition of the moral and historical agency of Anglo-Quebecers may explain the Francophone respondents' indifference. The last article presents a study of the historical consciousness of Francophone history teachers with regard to Anglo-Quebecers.
Putting the developed repertoire of historical consciousness to the test, the study focuses on how respondents historicize temporal change in their negotiations of ethnicity and their structuring of boundaries. While their views on "history" and their historicizations of different contexts lead them to reinforce ethnocultural differences and to withhold recognition of the moral and historical agency of the Other, almost half of the respondents show an openness to learning and teaching Anglo-Quebec realities and experiences. The reliance on pre-established historical visions to construct intergroup realities nevertheless underscores the exclusion of this latter group from the development of a national identity.
Resumo:
The proliferation of wireless sensor networks in a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology coupled with low-power, low-cost digital signal processors and radio frequency circuits. A sensor network is composed of thousands of low-cost and portable devices with large sensing, computing, and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed system for automated information gathering and distributed sensing. The main attractive feature is that such a sensor network can be deployed in remote areas. Since each sensor node is battery powered, all the sensor nodes should collaborate to form a fault-tolerant network so as to provide efficient utilization of precious network resources like the wireless channel, memory, and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge for the design of long-lived sensor nodes.
Resumo:
The magnetic properties of BaFe12O19 and BaFe10.2Sn0.74Co0.66O19 single crystals have been investigated in the temperature range 1.8 to 320 K with a field varying from -5 to +5 T applied parallel and perpendicular to the c axis. Low-temperature magnetic relaxation, which is ascribed to domain-wall motion, was measured between 1.8 and 15 K. The relaxation of magnetization exhibits a linear dependence on logarithmic time. The magnetic viscosity extracted from the relaxation data decreases linearly as the temperature goes down, which may correspond to the thermal depinning of domain walls. Below 2.5 K, the viscosity begins to deviate from the linear dependence on temperature, tending to become temperature independent. The near temperature independence of the viscosity suggests the existence of quantum tunneling of antiferromagnetic domain walls in this temperature range.
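The viscosity extraction described above amounts to fitting the slope of magnetization against ln(t). A minimal sketch with synthetic data (the numbers are illustrative, not the paper's measurements):

```python
import numpy as np

# The abstract reports magnetization relaxing linearly in ln(t);
# the magnetic viscosity S is the slope of M versus ln(t).
t = np.linspace(1.0, 1000.0, 200)            # time in seconds
S_true, M0 = -0.03, 1.0                      # synthetic viscosity, offset
M = M0 + S_true * np.log(t)                  # ideal logarithmic relaxation

# Least-squares line in ln(t): slope recovers the viscosity S.
S_fit, M0_fit = np.polyfit(np.log(t), M, 1)
```

Repeating this fit at each temperature gives the S(T) curve whose low-temperature plateau the paper interprets as quantum tunneling of domain walls.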
Resumo:
Due to advances in mobile devices and wireless networks, mobile cloud computing, which combines mobile computing and cloud computing, has gained momentum since 2009. The characteristics of mobile devices and wireless networks make the implementation of mobile cloud computing more complicated than for fixed clouds, and several major issues arise. One of the key issues in mobile cloud computing is the end-to-end delay in servicing a request. Data caching is one of the techniques widely used in wired and wireless networks to improve data access efficiency. In this paper we explore the possibility of a cooperative caching approach to enhance data access efficiency in mobile cloud computing. The proposed approach is based on cloudlets, one of the architectures designed for mobile cloud computing.
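The cooperative lookup order (local cloudlet, then peer cloudlets, then the distant cloud) can be sketched as follows; the LRU policy and all names are illustrative assumptions, not the paper's protocol:

```python
from collections import OrderedDict

class CloudletCache:
    """Toy cooperative cache: try the local cloudlet, then peer cloudlets,
    and only then the distant cloud (an illustrative sketch)."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order doubles as LRU order
        self.peers = []

    def _local_get(self, key):
        if key in self.store:
            self.store.move_to_end(key)       # refresh LRU position
            return self.store[key]
        return None

    def get(self, key, fetch_from_cloud):
        value = self._local_get(key)
        if value is None:
            for peer in self.peers:           # cooperative lookup
                value = peer._local_get(key)
                if value is not None:
                    break
        if value is None:
            value = fetch_from_cloud(key)     # costly last resort
        self.put(key, value)                  # keep a local copy
        return value

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the LRU entry

a, b = CloudletCache(), CloudletCache()
a.peers = [b]
b.put("item", "data")
hit = a.get("item", fetch_from_cloud=lambda k: "from-cloud")
```

Serving the request from a nearby cloudlet instead of the cloud is exactly where the end-to-end delay saving comes from.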
Resumo:
Ubiquitous computing has been an attractive research area of the past and present decade. It concerns the unobtrusive support of people in their everyday tasks by computers. This support is made possible by the omnipresence of computers that spontaneously join into distributed communication networks in order to exchange and process information. Ambient intelligence is an application of ubiquitous computing and a strategic research direction of the Information Society Technology programme of the European Union. The goal of ambient intelligence is a more comfortable and safer life. Distributed communication networks for ubiquitous computing are characterized by the heterogeneity of the computers involved, which range from tiny computers embedded in everyday objects to powerful mainframes. The computers connect spontaneously via wireless network technologies such as wireless local area networks (WLAN), Bluetooth, or UMTS. This heterogeneity complicates the development and construction of distributed communication networks. Middleware is a software technology for reducing complexity through abstraction into a homogeneous layer. Middleware offers a uniform view of the resources, functionalities, and computers it abstracts. Distributed communication networks for ubiquitous computing are characterized by spontaneous connections between computers. Classical middleware assumes that computers are permanently connected to one another. The concept of service-oriented architecture enables the development of middleware that also permits spontaneous connections between computers. The functionality of such middleware is realized by services, which are independent software units.
The Wireless World Research Forum describes services that future middleware should contain. These services are hosted by an execution environment. However, there are as yet no definitions of what such an execution environment should look like and what range of functions it must have. This work contributes to aspects of middleware development for distributed communication networks in ubiquitous computing. The focus is on middleware and enabling technologies. The contributions are presented as concepts and ideas for middleware development. They cover the areas of service discovery, service updating, and contracts between services. They are provided in a framework optimized for middleware development. This framework, called the Framework for Applications in Mobile Environments (FAME²), includes guidelines, a definition of an execution environment, and support for various access control mechanisms to protect middleware from unauthorized use. The capabilities of the FAME² execution environment include: • minimal resource usage, so that it can be used even on computers with few resources, such as mobile phones and tiny embedded computers • support for adapting the middleware by changing its contained services while the middleware is running • an open interface for using practically any existing solution for service discovery • and the possibility of updating services at runtime in order to carry out bug-fixing, optimizing, and adapting maintenance on services. An accompanying work is the Extensible Constraint Framework (ECF), which makes Design by Contract (DbC) usable within FAME². DbC is a technique for formulating contracts between services and thereby increasing software quality. ECF allows the negotiation as well as the optimization of such contracts.
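Design by Contract, which ECF brings to FAME², can be illustrated with a minimal generic decorator; this is a textbook sketch of DbC, not the ECF implementation:

```python
import functools

def contract(pre=None, post=None):
    """Minimal Design-by-Contract decorator: checks a precondition on the
    arguments and a postcondition on the result."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre is not None:
                # The caller's obligation: valid arguments.
                assert pre(*args, **kwargs), f"precondition failed for {func.__name__}"
            result = func(*args, **kwargs)
            if post is not None:
                # The service's obligation: a valid result.
                assert post(result), f"postcondition failed for {func.__name__}"
            return result
        return wrapper
    return decorate

@contract(pre=lambda a, b: b != 0, post=lambda r: isinstance(r, float))
def divide(a, b):
    return a / b

value = divide(6, 3)
```

ECF goes further by letting services negotiate and optimize such contracts at runtime rather than fixing them at definition time.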
Resumo:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete, and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
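Checking the paper's LMI conditions requires an SDP solver; as a simpler illustration of the underlying Lyapunov idea, a delay-free system x' = Ax can be certified stable by solving the Lyapunov equation A^T P + P A = -Q and testing that P is positive definite (this is the classical special case, not the paper's delay-dependent conditions):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A Hurwitz matrix (eigenvalues -2 and -3), so a certificate must exist.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so passing (A.T, -Q) yields P with A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# P > 0 together with the equation certifies asymptotic stability.
stable = bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
```

The paper's conditions generalize this certificate to a Lyapunov-Krasovskii functional whose positivity and decrease conditions become the reported LMIs.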
Resumo:
In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources such as data sets and analytical tools are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables in order to predict past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack interoperability. The BDW system brings all these disparate units together so that the user can combine tools with little thought as to their availability, data formats and interoperability. The current Web Services-based Grid environment enables execution of the BDW workflow tasks in remote nodes but with a limited scope. The next step in the evolution of the BDW architecture is to enable workflow tasks to utilise computational resources available within and outside the BDW domain. We describe the present BDW architecture and its transition to a new framework which provides a distributed computational environment for mapping and executing workflows in addition to bringing together heterogeneous resources and analytical tools.