963 results for architectures


Relevance: 10.00%

Publisher:

Abstract:

Recent trends envisage multi-standard architectures as a promising solution for future wireless transceivers. The computationally intensive decimation filter plays an important role in channel selection for multi-mode systems, and an efficient reconfigurable implementation is key to achieving low power consumption. To this end, this paper presents a dual-mode Residue Number System (RNS) based decimation filter which can be programmed for the WCDMA and 802.11a standards. Decimation is done using multistage, multirate finite impulse response (FIR) filters. Implemented in the RNS domain, these FIR filters offer high speed because of carry-free operation on smaller residues in parallel channels. The FIR filters are also programmable to a selected standard by reconfiguring the hardware architecture. The total area increases by only 33% to include 802.11a (WLAN) compared to a single-mode WCDMA transceiver. In each mode, the unused parts of the overall architecture are powered down and bypassed to save power. The performance of the proposed decimation filter in terms of critical path delay and area is tabulated.
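
As a rough illustration of the carry-free arithmetic this design relies on, the following Python sketch (moduli, taps and samples are invented; the real filter uses dedicated hardware channels) computes an FIR filter channel-wise on residues and recovers the output with the Chinese Remainder Theorem:

```python
# Toy RNS FIR: each multiply-accumulate runs independently modulo a set of
# coprime moduli ("parallel channels"); the final result is recovered by CRT.
from math import prod

MODULI = (251, 253, 255, 256)          # pairwise coprime; dynamic range M
M = prod(MODULI)

def to_rns(x):
    """Represent a nonnegative integer as its residues in all channels."""
    return tuple(x % m for m in MODULI)

def crt(residues):
    """Recover the integer in [0, M) from its residues."""
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

def fir_rns(samples, taps):
    """Direct-form FIR filter computed channel-wise in the RNS domain."""
    h = [to_rns(t) for t in taps]
    out = []
    for n in range(len(samples)):
        acc = [0] * len(MODULI)
        for k, hk in enumerate(h):
            if n - k < 0:
                break
            x = to_rns(samples[n - k])
            # carry-free: every channel works on its small residue alone
            acc = [(a + hk[i] * x[i]) % MODULI[i] for i, a in enumerate(acc)]
        out.append(crt(acc))
    return out

print(fir_rns([1, 2, 3, 4], [3, 1]))   # same as a plain FIR: [3, 7, 11, 15]
```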

Relevance: 10.00%

Publisher:

Abstract:

The creation of three-dimensionally engineered nanoporous architectures via covalently interconnected nanoscale building blocks remains one of the fundamental challenges in nanotechnology. Here we report the synthesis of ordered, stacked macroscopic three-dimensional (3D) solid scaffolds of graphene oxide (GO) fabricated via chemical cross-linking of two-dimensional GO building blocks. The resulting 3D GO network solids form highly porous interconnected structures, and the controlled reduction of these structures leads to the formation of 3D conductive graphene scaffolds. These 3D architectures show promise for applications such as gas storage; CO2 gas adsorption measurements carried out under ambient conditions show high sorption capacity, demonstrating the possibility of creating new functional carbon solids starting from two-dimensional carbon layers.

Relevance: 10.00%

Publisher:

Abstract:

Bank switching in embedded processors with a partitioned memory architecture results in code-size as well as run-time overhead. This work presents an algorithm, and its application, to assist the compiler in eliminating the redundant bank-switching code it introduces and in deciding the optimum data allocation to banked memory. A relation matrix, formed from the memory-bank state transition corresponding to each bank-selection instruction, is used to detect redundant code. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data-mapping scheme is subjected to a static machine-code analysis, which identifies the one with the minimum number of bank-switching instructions. Even though the method is compiler independent, the algorithm exploits certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory banks and to other architectures, so high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
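
A toy sketch of the redundancy-detection idea (Python; a simple stand-in for the paper's relation-matrix formulation, with an invented instruction encoding):

```python
# Walk a straight-line instruction list, track which bank the hardware is
# already in, and drop any bank-select instruction that would not change it.

def eliminate_redundant_bank_selects(instructions):
    """instructions: list of ('BANKSEL', n) or ('OP', text) tuples."""
    current_bank = None          # unknown at entry to the block
    optimized = []
    for instr in instructions:
        if instr[0] == 'BANKSEL':
            if instr[1] == current_bank:
                continue         # redundant: already in this bank
            current_bank = instr[1]
        optimized.append(instr)
    return optimized

code = [('BANKSEL', 1), ('OP', 'movwf  x'),
        ('BANKSEL', 1), ('OP', 'movwf  y'),   # redundant switch, removed
        ('BANKSEL', 0), ('OP', 'movwf  z')]
print(eliminate_redundant_bank_selects(code))
```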

Relevance: 10.00%

Publisher:

Abstract:

The focus of self-assembly as a strategy for synthesis has been confined largely to molecules, because of the importance of manipulating the structure of matter at the molecular scale. We have investigated the influence of temperature and pH, in addition to the concentration of the capping agent used, on the formation of the nano-bio conjugates. For example, a narrower size distribution of the nanoparticles was observed with increasing concentration of the protein, which supports the fact that γ-globulin acts both as a controller of nucleation and as a stabiliser. As analyzed through various photophysical, biophysical and microscopic techniques such as TEM, AFM, C-AFM, SEM, DLS, OPM, CD and FTIR, we observed that the initial photoactivation of γ-globulin at pH 12 for 3 h resulted in small protein fibres. Further irradiation for 24 h led to the formation of self-assembled long fibres of the protein of ca. 5-6 nm and to the observation of a surface plasmon resonance band at around 520 nm, with concomitant quenching of the luminescence intensity at 680 nm. The observed light-triggered self-assembly of the protein and its effect on controlling the fate of the anchored nanoparticles can be compared with naturally occurring processes such as photomorphogenesis. Furthermore, our approach offers a way to understand the role played by the self-assembly of the protein in the ordering and knock-out of the metal nanoparticles, and also in the design of nano-biohybrid materials for medicinal and optoelectronic applications. Investigation of the potential applications of NIR-absorbing and water-soluble squaraine dyes 1-3 for protein labeling and as anti-amyloid agents forms the subject matter of the third chapter of the thesis. The study of their interactions with various proteins revealed that 1-3 showed unique interactions towards serum albumins as well as lysozyme. These interactions resulted in changes of 69%, 71% and 49% in the absorption spectra, as well as significant quenching of the fluorescence intensity, of the dyes 1-3, respectively. Half-reciprocal analysis of the absorption data and isothermal titration calorimetric (ITC) analysis of the titration experiments gave a 1:1 stoichiometry for the complexes formed between lysozyme and the squaraine dyes, with association constants (Kass) in the range 10⁴-10⁵ M⁻¹. We have determined the changes in the free energy (ΔG) for the complex formation, and the values are found to be -30.78, -32.31 and -28.58 kJ mol⁻¹ for the dyes 1, 2 and 3, respectively. Furthermore, we have observed a strong induced CD (ICD) signal corresponding to the squaraine chromophore in the case of the halogenated squaraine dyes 2 and 3, at 636 and 637 nm, confirming the complex formation in these cases. To understand the nature of the interaction of the squaraine dyes 1-3 with lysozyme, we have investigated the interaction of the dyes with different amino acids. These results indicated that the dyes 1-3 showed significant interactions with cysteine and glutamic acid, which are present in the side chains of lysozyme. In addition, temperature-dependent studies revealed that the interaction of the dye and the lysozyme is irreversible. Furthermore, we have investigated the interactions of these NIR dyes 1-3 with β-amyloid fibres derived from lysozyme to evaluate their potential as inhibitors of this biologically important protein aggregation.
These β-amyloid fibrils are insoluble protein aggregates that have been associated with a range of neurodegenerative diseases, including Huntington's, Alzheimer's, Parkinson's and Creutzfeldt-Jakob diseases. We have synthesized amyloid fibres from lysozyme through its incubation in acidic solution below pH 4, allowing the fibres to form at elevated temperature. To quantify the binding affinities of the squaraine dyes 1-3 with the β-amyloids, we have carried out isothermal titration calorimetric (ITC) measurements. The association constants were determined and are found to be 1.2 × 10⁵, 3.6 × 10⁵ and 3.2 × 10⁵ M⁻¹ for the dyes 1-3, respectively. To gain more insight into the amyloid-inhibiting nature of the squaraine dyes under investigation, we have carried out thioflavin assays, CD, isothermal titration calorimetry and microscopic analysis. The addition of the dyes 1-3 (5 μM) led to the complete quenching of the apparent thioflavin fluorescence, thereby indicating the destabilization of the β-amyloid fibres in the presence of the squaraine dyes. Further, the inhibition of the amyloid fibres by the squaraine dyes 1-3 has been evidenced through DLS, TEM, AFM and SAED, wherein we observed the complete destabilization of the amyloid fibre and the transformation of the fibre into spherical particles. These results demonstrate that the squaraine dyes 1-3 can act as protein-labeling agents as well as inhibitors of protein amyloidogenesis. The last chapter of the thesis describes the synthesis and investigation of the self-assembly as well as the bio-imaging aspects of a few novel tetraphenylethene conjugates 4-6. Expectedly, these conjugates showed significant solvatochromism and exhibited a hypsochromic shift (negative solvatochromism) as the solvent polarity increased, and these observations were justified through theoretical studies employing the B3LYP/6-31G method. We have investigated the self-assembly properties of these D-A conjugates through variation of the percentage of water in acetonitrile solution, due to the formation of nanoaggregates. Further, the contour map of the observed fluorescence intensity as a function of the fluorescence excitation and emission wavelengths confirmed the formation of J-type aggregates in these cases. To better understand the type of self-assemblies formed from the TPE conjugates 4-6, we have carried out morphological analysis through various microscopic techniques such as DLS, SEM and TEM. At a water fraction of ca. 70%, we observed rod-shaped architectures ~780 nm in diameter and ~12 μm in length, as evidenced through TEM and SEM analysis. We have made similar observations with the dodecyl conjugate 5. In ca. 70% and 50% water/acetonitrile mixtures, the aggregates formed from 4 and 5 were found to be highly crystalline, and such structures transformed to an amorphous nature as the water fraction was increased to 99%. To evaluate the potential of the conjugates as bio-imaging agents, we have carried out in vitro cytotoxicity and cellular uptake studies through MTT assays, flow cytometric and confocal laser scanning microscopic techniques. Thus nanoparticles of these conjugates, which exhibited efficient emission, large Stokes shifts, good stability, biocompatibility and excellent cellular imaging properties, can have potential applications for tracking cells as well as in cell-based therapies.
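
As a consistency check on the reported thermodynamics (a sketch assuming the standard relation and T ≈ 298 K, which the abstract does not state), the binding free energy follows from the association constant:

```latex
\Delta G = -RT\ln K_{\mathrm{ass}}
         = -\left(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}\right)\left(298\ \mathrm{K}\right)
           \ln\!\left(2.5\times 10^{5}\right)
         \approx -30.8\ \mathrm{kJ\,mol^{-1}}
```

That is, a Kass of roughly 2.5 × 10⁵ M⁻¹ reproduces the -30.78 kJ mol⁻¹ reported for dye 1, and the quoted 10⁴-10⁵ M⁻¹ range maps to about -23 to -28.5 kJ mol⁻¹ at the same temperature.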
In summary, we have synthesized novel functional organic chromophores and have carried out a systematic investigation of the self-assembly of these synthetic and biological building blocks under a variety of conditions. The investigation of the interaction of the water-soluble NIR squaraine dyes with lysozyme indicates that these dyes can act as protein-labeling agents, and their efficiency in inhibiting β-amyloid fibres indicates their potential as anti-amyloid agents.

Relevance: 10.00%

Publisher:

Abstract:

Service-oriented architecture (SOA) technology raises great expectations in industry as well as in research. It has proven to be the currently ideal solution for environments in which IT requirements change rapidly. Today's IT systems must allow management tasks such as software installation, adaptation or replacement without significantly disturbing ongoing operation. Service-oriented architectures, in which software components are available in the form of services, provide the flexibility needed for this. Through its interface, a service gives local as well as remote applications access to its functionality. In the following we consider only service-oriented architectures in which services can be dynamically discovered, bound, composed, negotiated and adapted at runtime. An application can work with different services, for example when services fail or when a new service fulfils the application's requirements better. One of our basic assumptions is therefore that both the service supply and the demand side are variable. Service-oriented architectures carry particular weight in the implementation of business processes. Within the Enterprise Integration Architecture paradigm, individual work steps are implemented as services and a business process is executed as a workflow of services. Such a service composition is also called an orchestration. Especially for so-called B2B (business-to-business) integration, services are the proven means of supporting communication across company boundaries. Services are usually realized here as Web Services, which are orchestrated by means of BPEL4WS. XML-based message exchange and the HTTP protocol provide interoperability between heterogeneous systems and transparency of the message traffic. Providers expect high benefit from their public services. On the one hand, they hope for increased integration of their services into software processes; on the other hand, they count on new software being developed on the basis of their services. In the future, hundreds of such services will be available, and it will be hard for developers to find suitable service offers. The ADDO project has achieved important results in this field. In the course of the project it was shown that the use of semantic specifications makes it possible to automatically screen services with respect to both their functional and their non-functional properties, in particular quality of service, and to bind them to service aggregates [15]. To this end, ontology schemata [10, 16], matching algorithms [16, 9] and tools were developed and implemented as a framework [16]. The quality-of-service matching algorithm developed in this context masters the automatic negotiation of contracts for service usage, for instance in order to integrate fee-based services. ADDO provides an approach for creating templates for service aggregates in BPEL4WS that are managed automatically at runtime. The approach proved its effectiveness at the international Web Service Challenge 2006 competition in San Francisco: the algorithm for semantic service composition developed for ADDO took first place.
The algorithm makes it possible to select a suitable subset from a very large set of offered services, to combine these services into service aggregates, and thereby to provide the functionality of a given requested service. Further results of the ADDO project have been published at international workshops and conferences [12, 11].
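
A rough sketch of the composition problem behind the Web Service Challenge (Python; the concept sets, service names and the greedy forward-chaining strategy are illustrative, not ADDO's actual matching algorithm):

```python
# Each service consumes a set of input concepts and produces output concepts.
# Starting from what the requester can provide, add every newly invocable
# service until the requested outputs are covered.

def compose(services, provided, requested):
    """services: dict name -> (inputs, outputs); returns invocation layers."""
    known, layers = set(provided), []
    while not requested <= known:
        layer = [n for n, (ins, outs) in services.items()
                 if ins <= known and not outs <= known]   # newly invocable
        if not layer:
            return None                                   # no composition exists
        layers.append(layer)
        for n in layer:
            known |= services[n][1]
    return layers

services = {
    "GeoCode": ({"address"}, {"coordinates"}),
    "Weather": ({"coordinates"}, {"forecast"}),
}
print(compose(services, provided={"address"}, requested={"forecast"}))
# -> [['GeoCode'], ['Weather']]
```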

Relevance: 10.00%

Publisher:

Abstract:

The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers working on parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. These days, parallel programming is becoming a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach the objective: research the state of the art of parallel programming today, improve the education of software developers about the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well-documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread-cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
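
AthenaMP itself is a C++/OpenMP library; as a rough, language-shifted sketch of one of its generic components, the task pool below processes an irregular workload in which handling one item may spawn further items, and blocks on the queue instead of busy-waiting (the same concern the thesis's OpenMP proposal addresses). All names and the example are invented:

```python
import threading, queue

def run_task_pool(initial_items, process, n_workers=4):
    """process(item) returns an iterable of newly spawned items."""
    tasks = queue.Queue()
    for item in initial_items:
        tasks.put(item)

    def worker():
        while True:
            item = tasks.get()            # blocks; no busy-waiting
            try:
                if item is None:          # poison pill: shut down
                    return
                for child in process(item):
                    tasks.put(child)      # irregular: work creates work
            finally:
                tasks.task_done()

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    tasks.join()                          # all items, incl. spawned ones, done
    for _ in workers:
        tasks.put(None)
    for w in workers:
        w.join()

# Example: count the nodes of an irregular tree given as nested lists.
visited = []
run_task_pool([[1, [2, [3], [4, [5]]], [6]]],
              lambda node: visited.append(node[0]) or node[1:])
print(len(visited))                       # -> 6
```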

Relevance: 10.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and resulting land-use patterns. An essential methodology to study and quantify such interactions is provided by the adoption of land-use models. Through the application of land-use models, it is possible to analyze the complex structure of linkages and feedbacks and also to determine the relevance of driving forces. Modeling land use and land-use changes has a long tradition. In particular on the regional scale, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, which are driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are the notably extended capability to integrate models and the strict separation of application and implementation. These features enable efficient development, testing and usage of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting-language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or by use of a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. Analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area could mainly be characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function typically is a map-comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map-comparison measure as objective function. The time period for the calibration ranged from 1981 to 2002. For this period, respective reference land-use maps were compiled. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
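
A minimal sketch of the figure-of-merit objective function used in the calibration (Python/NumPy; the category codes and maps are invented):

```python
# Figure of merit (FoM) compares observed vs. simulated change between an
# initial map and a reference map: FoM = hits / (hits + wrong hits + misses
# + false alarms), so only changed cells matter, not easy persistence.
import numpy as np

def figure_of_merit(initial, reference, simulated):
    obs = reference != initial                 # change observed in reality
    sim = simulated != initial                 # change predicted by the model
    hits         = np.sum(obs & sim & (simulated == reference))
    wrong_hits   = np.sum(obs & sim & (simulated != reference))
    misses       = np.sum(obs & ~sim)
    false_alarms = np.sum(~obs & sim)
    return hits / (hits + wrong_hits + misses + false_alarms)

init = np.array([[0, 0], [1, 1]])
ref  = np.array([[0, 2], [1, 2]])
sim  = np.array([[2, 2], [1, 1]])
# one hit, one miss, one false alarm -> FoM = 1/3
print(figure_of_merit(init, ref, sim))
```

A genetic algorithm then simply maximizes this score over the selected model parameters.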

Relevance: 10.00%

Publisher:

Abstract:

Software Defined Radio (SDR) hardware platforms use parallel architectures. Current concepts of developing applications (such as WLAN) for these platforms are complex, because developers describe an application with hardware-specifics that are relevant to parallelism, such as mapping and scheduling. To reduce this complexity, we have developed a new programming approach for SDR applications, called Virtual Radio Engine (VRE). VRE defines a language for describing applications, and a tool chain that consists of a compiler kernel and other tools (such as a code generator) to generate executables. The thesis presents this concept and describes the language and the compiler kernel that have been developed by the author. The language is hardware-independent, i.e., developers describe tasks and the dependencies between them. The compiler kernel performs automatic parallelization, i.e., it is capable of transforming a hardware-independent program into a hardware-specific one by solving hardware-specifics, in particular mapping, scheduling and synchronization. Thus, VRE simplifies programming, as developers do not have to solve hardware-specifics manually.
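
A toy sketch of the mapping/scheduling step such a compiler kernel automates (Python; task names, costs and the two-processor platform are invented, and this greedy list scheduler is only a generic stand-in for VRE's actual algorithms):

```python
# Developers describe only tasks and dependencies; a list scheduler assigns
# each ready task to the processor that frees up first.
from collections import deque

def list_schedule(tasks, deps, cost, n_procs=2):
    """tasks: names; deps: dict task -> set of predecessors; cost: dict."""
    indeg = {t: len(deps.get(t, set())) for t in tasks}
    finish, proc_free = {}, [0.0] * n_procs
    ready = deque(t for t in tasks if indeg[t] == 0)
    schedule = []
    while ready:
        t = ready.popleft()
        data_ready = max((finish[p] for p in deps.get(t, set())), default=0.0)
        proc = min(range(n_procs), key=lambda i: proc_free[i])
        start = max(proc_free[proc], data_ready)
        finish[t] = start + cost[t]
        proc_free[proc] = finish[t]
        schedule.append((t, proc, start, finish[t]))
        for s in tasks:                       # release successors of t
            if t in deps.get(s, set()):
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return schedule

deps = {"sync": set(), "fft": {"sync"}, "demod": {"fft"}, "decode": {"demod"}}
cost = {"sync": 2, "fft": 4, "demod": 3, "decode": 5}
print(list_schedule(["sync", "fft", "demod", "decode"], deps, cost))
```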

Relevance: 10.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During the recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed in respect of its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how close these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases, was superior to the other representations.
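
A skeleton of the synthesis loop described above (Python; all operators and the fitness function are placeholders for the thesis's program representations and network simulations):

```python
import random

def evolve(random_program, crossover, mutate, simulate_fitness,
           pop_size=100, generations=200, n_simulations=5):
    """simulate_fitness(program) -> distance from the specified global
    behavior in one randomized network simulation (lower is better)."""
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        # average several randomized simulations to damp evaluation noise
        scored = sorted(population,
                        key=lambda p: sum(simulate_fitness(p)
                                          for _ in range(n_simulations)))
        parents = scored[:pop_size // 4]          # truncation selection
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))]
    return scored[0]                              # most promising program
```

The representation-specific parts (SGP, eSGP, LGP, Fraglets, RBGP, eRBGP) plug in as the random_program, crossover and mutate operators.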

Relevance: 10.00%

Publisher:

Abstract:

The importance of service level management (SLM) for enterprise applications grows with the increasing criticality of IT-supported processes for the success of individual companies. Traditionally, an effective SLM is implemented by establishing monitoring processes in hierarchical management environments that support an administrator in the necessary reconfiguration of systems. These hierarchical approaches, however, are applicable to current, highly dynamic software architectures only to a very limited extent. One example is service-oriented architectures (SOA), in which the business functionality is modeled through the interplay of individual, mutually independent services on the basis of descriptive workflow specifications. This results in high runtime dynamics of the overall architecture. For SLM, the decentralized structure of a SOA with different administrative responsibilities for individual subsystems is particularly problematic, since regulating interventions are possible only to a very limited extent, on the one hand because the implementation of individual services is encapsulated, and on the other hand because a central controlling instance is missing. This thesis defines the architecture of an SLM system for SOA environments in which autonomous management components cooperate in order to meet superordinate service-level objectives: with the help of self-management technologies, service level management is first automated at the level of individual services. The autonomous management components of these services can then use self-organization mechanisms to pursue overarching goals for optimizing service quality and resource usage. For SLM at the level of SOA workflows, temporary cross-service cooperations have to be established in order to fulfil service-level requirements; these cooperations may therefore also span several administrative domains. Such a temporally limited cooperation of autonomous subsystems can reasonably be established only in a decentralized manner, since the respective cooperation partners are not known in advance and, depending on the lifetime of individual workflows, participating components may be exchanged at runtime. The thesis develops a procedure for coordinating autonomous management components with the goal of optimizing response times at the workflow level: by transferring response-time shares among one another, management components can tighten or loosen their individual targets without changing the overall response-time objective. The transfer of response-time shares is realized by means of an auction procedure. The technical basis of the cooperation is a group communication mechanism. Furthermore, with regard to the use of shared, virtualized resources, competing services are prioritized according to business objectives. As part of the practical realization, the implementation of central architectural elements and of the developed self-organization procedures is presented, exemplified by the SLM of concrete components. A hybrid simulation approach is used to study the management cooperation in larger scenarios. The evaluation includes investigations into the scalability of the approach, focusing on a system of cooperating management components, in particular with regard to the communication overhead.
The evaluation shows that cross-service, autonomous performance management is possible in SOA environments. The results suggest that the developed approach can also be applied successfully in large environments.
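
A minimal sketch of the auction idea (Python; the component names, the slack-based bidding rule and all numbers are invented, and the real procedure runs over a group communication mechanism rather than a shared dictionary):

```python
# A component whose response-time target is too tight asks its peers for
# spare time; the peer with the most slack sells some of its share. The sum
# of all individual targets (the workflow-level objective) stays constant.

def transfer_share(components, buyer, needed):
    """components: dict name -> {'target': s, 'observed': s}. Mutates targets."""
    bids = {n: c['target'] - c['observed']           # slack a peer could sell
            for n, c in components.items() if n != buyer}
    seller = max(bids, key=bids.get)                 # peer with the most slack
    amount = min(needed, bids[seller])
    if amount <= 0:
        return None                                  # nobody can help
    components[seller]['target'] -= amount           # seller tightens its goal
    components[buyer]['target'] += amount            # buyer loosens its goal
    return seller, amount

comps = {'A': {'target': 1.0, 'observed': 1.3},      # A misses its target
         'B': {'target': 2.0, 'observed': 1.2},
         'C': {'target': 1.5, 'observed': 1.4}}
total_before = sum(c['target'] for c in comps.values())
print(transfer_share(comps, 'A', needed=0.3))        # -> ('B', 0.3)
assert abs(total_before - sum(c['target'] for c in comps.values())) < 1e-9
```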

Relevance: 10.00%

Publisher:

Abstract:

Enterprise Modeling (EM) is currently in operation either as a technique to represent and understand the structure and behavior of the enterprise, or as a technique to analyze business processes, and in many cases as a support technique for business process reengineering. However, EM architectures and methods for Enterprise Engineering can also be used to support new management techniques like SIX SIGMA, because these new techniques need a clear, transparent and integrated definition and description of the business activities of the enterprise in order to build up, optimize and operate a successful enterprise. The main goal of SIX SIGMA is to optimize the performance of processes. A still open question is: "What are the adequate quality criteria and methods to ensure such performance? What must we do to achieve quality governance?" This paper describes a method combining an Enterprise Engineering method and the SIX SIGMA strategy to reach Quality Governance.
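
One concrete quality criterion such a combined EM/SIX SIGMA method could monitor is the standard sigma level derived from defects per million opportunities (DPMO); a minimal sketch with the conventional 1.5-sigma long-term shift (all numbers are invented):

```python
from statistics import NormalDist

def sigma_level(defects, units, opportunities_per_unit):
    """Short-term sigma level from observed defect counts."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    # z-score of the defect rate plus the conventional 1.5-sigma shift
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(round(sigma_level(defects=35, units=1000, opportunities_per_unit=10), 2))
# 35 defects in 10,000 opportunities -> DPMO 3500 -> about 4.2 sigma
```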

Relevance: 10.00%

Publisher:

Abstract:

Enterprise Modeling (EM) is currently in operation either as a technique to represent and understand the structure and behavior of the enterprise, or as a technique to analyze business processes, and in many cases as a support technique for business process reengineering. However, EM architectures and methods for Enterprise Engineering can also be used to support new management techniques like SIX SIGMA, because these new techniques need a clear, transparent and integrated definition and description of the business activities of the enterprise in order to build up, optimize and operate a successful enterprise.

Relevance: 10.00%

Publisher:

Abstract:

The Scheme86 and HP Precision architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface. The latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators, and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.

Relevance: 10.00%

Publisher:

Abstract:

With the development of high-level languages for new computer architectures comes the need for appropriate debugging tools as well. One method for meeting this need would be to develop, from scratch, a symbolic debugger with the introduction of each new language implementation for any given architecture. This, however, seems to require unnecessary duplication of effort among developers. This paper describes Maygen, a "debugger generation system," designed to efficiently provide the desired language-dependent and architecture-dependent debuggers. A prototype of the Maygen system has been implemented and is able to handle the semantically different languages of C and OPAL.
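
A rough sketch of the generation idea (Python; the plug-in interfaces and the composed command are invented, not Maygen's actual design):

```python
# The generator core is written once; a concrete debugger is produced by
# combining a language-dependent plug-in (names, types, value formatting)
# with an architecture-dependent plug-in (registers, memory access).

class Language:                    # language-dependent knowledge
    def variable_address(self, symbol_table, name): ...
    def format_value(self, raw_bytes, type_info): ...

class Architecture:                # architecture-dependent knowledge
    def read_memory(self, address, size): ...

def make_debugger(language, architecture):
    """Compose a minimal 'print variable' command from the two plug-ins."""
    def print_variable(symbol_table, name):
        addr, size, typ = language.variable_address(symbol_table, name)
        raw = architecture.read_memory(addr, size)
        return f"{name} = {language.format_value(raw, typ)}"
    return print_variable

# Supporting a new language or target then only requires its own plug-in;
# the generator core (here: make_debugger) is reused unchanged.
```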

Relevance: 10.00%

Publisher:

Abstract:

This report addresses the problem of achieving cooperation within small- to medium-sized teams of heterogeneous mobile robots. I describe a software architecture I have developed, called ALLIANCE, that facilitates robust, fault-tolerant, reliable, and adaptive cooperative control. In addition, an extended version of ALLIANCE, called L-ALLIANCE, is described, which incorporates a dynamic parameter update mechanism that allows teams of mobile robots to improve the efficiency of their mission performance through learning. A number of experimental results of implementing these architectures on both physical and simulated mobile robot teams are described. In addition, this report presents the results of studies of a number of issues in mobile robot cooperation, including fault-tolerant cooperative control, adaptive action selection, distributed control, robot awareness of team member actions, improving efficiency through learning, inter-robot communication, action recognition, and local versus global control.
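
A toy sketch of the motivation-based action selection that ALLIANCE's motivational behaviors implement (Python; rates, thresholds and tasks are invented, and the real architecture also models acquiescence, sensory feedback, and motivation reset on activation):

```python
# Each behavior set's motivation grows with an impatience rate, is suppressed
# while another robot performs the task, and the robot activates the task
# whose motivation first crosses a threshold.

def select_task(motivation, impatience, others_active, threshold=5.0, dt=1.0):
    """One control cycle: update motivations, return a task to activate (or None)."""
    for task in motivation:
        if others_active.get(task):
            motivation[task] = 0.0                # another robot covers it: wait
        else:
            motivation[task] += impatience[task] * dt
    best = max(motivation, key=motivation.get)
    return best if motivation[best] >= threshold else None

motivation = {"push_box": 0.0, "patrol": 0.0}
impatience = {"push_box": 2.0, "patrol": 1.0}
for step in range(6):
    act = select_task(motivation, impatience, others_active={"push_box": step < 3})
    print(step, act)   # patrol wins first; push_box takes over once unblocked
```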